RE: Replication factor affecting write performance

2014-09-01 Thread Vimal Jain
Mike,
Please send an email to user-unsubscr...@hbase.apache.org.
Don't spam the entire mailing list.

Unsubscribe





*From:* Stanley Shi [mailto:s...@pivotal.io]
*Sent:* Monday, September 01, 2014 7:31 PM
*To:* user@hadoop.apache.org
*Cc:* Julien Lehuen; Tyler McDougall
*Subject:* Re: Replication factor affecting write performance



What's the network setup and topology?

Also, the size of the cluster?



On Mon, Sep 1, 2014 at 4:10 PM, Laurens Bronwasser 
laurens.bronwas...@imc.nl wrote:

And now with the right label on the Y-axis.





*From: *Microsoft Office User laurens.bronwas...@imc.nl
*Date: *Monday, September 1, 2014 at 9:56 AM
*To: *user@hadoop.apache.org user@hadoop.apache.org
*Cc: *Julien Lehuen julien.leh...@imc.nl, Tyler McDougall 
tyler.mcdoug...@imc.nl
*Subject: *Replication factor affecting write performance



Hi,

We have a setup with two clusters.

One cluster shows a very strong degradation when we increase the replication
factor.

Another cluster shows hardly any degradation with increased replication
factor.



Any idea how to find out the bottleneck in the slower cluster?
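A minimal sketch of how the two clusters can be compared, assuming a Hadoop
1.x/2.x-style CLI and using /path/being/written as a placeholder path:

  # Per-datanode capacity, DFS used and last-contact times; a slow or
  # flaky datanode in the write pipeline usually shows up here first.
  hadoop dfsadmin -report

  # Block placement and replication health for the data being written.
  hadoop fsck /path/being/written -blocks -locations -racks

  # New files pick up dfs.replication from hdfs-site.xml; existing files
  # can be re-replicated explicitly for an apples-to-apples test:
  hadoop fs -setrep -w 3 /path/being/written

Comparing the dfsadmin output of the slow and the fast cluster (capacity, DFS
used, dead nodes) is usually the quickest way to spot which datanode or link
is the bottleneck.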











-- 

Regards,

*Stanley Shi,*


Difference between different tar

2014-07-21 Thread Vimal Jain
Hi,
On the download page of Hadoop (e.g.
http://apache.arvixe.com/hadoop/common/stable1/ ), I see lots of tars.
What's the difference between hadoop-1.2.1-bin.tar.gz
http://apache.arvixe.com/hadoop/common/stable1/hadoop-1.2.1-bin.tar.gz
and hadoop-1.2.1.tar.gz
http://apache.arvixe.com/hadoop/common/stable1/hadoop-1.2.1.tar.gz ( the
one without bin) ?
Which one should I use?
For HBase, I am using hbase-0.98.3-hadoop1-bin.tar.gz
http://apache.arvixe.com/hbase/stable/hbase-0.98.3-hadoop1-bin.tar.gz.
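A quick way to compare the two archives yourself, sketched here with the file
names quoted above (after downloading both), is to list their contents without
unpacking them:

  # List the top-level entries of each tarball and compare them.
  tar -tzf hadoop-1.2.1-bin.tar.gz | head -n 20
  tar -tzf hadoop-1.2.1.tar.gz | head -n 20

The listing shows directly what one archive contains that the other does not.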

-- 
Thanks and Regards,
Vimal Jain


Re: unsubscribe

2014-03-18 Thread Vimal Jain
Please send an email to user-unsubscr...@hadoop.apache.org.


On Tue, Mar 18, 2014 at 6:57 PM, Rananavare, Sunil 
sunil.rananav...@unify.com wrote:

   Please remove me from the user distribution list.

 Thanks.




-- 
Thanks and Regards,
Vimal Jain


Size of data directory same on all nodes in cluster

2014-03-12 Thread Vimal Jain
Hi,
I have set up a 2-node HBase cluster on top of a 2-node HDFS cluster.
When I run the du -sh command on the data directory (where Hadoop stores
data) on both machines, it shows the same size.
As per my understanding, half of the entire data should be stored on one
machine and the other half on the other machine.
Please help.
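A minimal sketch for checking where HDFS actually places the replicas,
assuming a Hadoop 1.x-style CLI and /hbase as the directory of interest:

  # How much DFS space each datanode reports as used.
  hadoop dfsadmin -report

  # Which datanodes hold replicas of each block of a file or directory.
  hadoop fsck /hbase -files -blocks -locations

  # The number of copies comes from dfs.replication in hdfs-site.xml;
  # with 2 datanodes and replication factor 2, both data directories
  # end up holding a full copy and report the same size with du -sh.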

-- 
Thanks and Regards,
Vimal Jain


Warning in secondary namenode log

2014-03-06 Thread Vimal Jain
Hi,
I am setting up a 2-node Hadoop cluster (1.2.1).
After formatting the FS and starting the namenode, datanode and
secondarynamenode, I am getting the below warning in the SecondaryNameNode logs.

*WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint
Period   :3600 secs (60 min)*

Please help to debug this.
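For what it's worth, this message looks like the SecondaryNameNode simply
printing its checkpoint settings at startup rather than reporting an error
(in 1.x it happens to log them at WARN level). A minimal sketch for confirming
the values in use, assuming the default conf/ and logs/ layout:

  # fs.checkpoint.period is in seconds; 3600 (one hour) is the default.
  grep -A1 'fs.checkpoint' conf/core-site.xml

  # Watch the SecondaryNameNode log to confirm checkpoints actually run.
  tail -f logs/hadoop-*-secondarynamenode-*.log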
-- 
Thanks and Regards,
Vimal Jain


Hadoop and Hbase setup

2014-02-13 Thread Vimal Jain
Hi,
I am planning to install Hadoop and HBase in a 2-node cluster.
I have chosen 0.94.16 for HBase (the current stable version).
I am confused about which Hadoop version to choose.
I see on Hadoop's download page that there are 2 stable series, one is the 1.X
series and the other is the 2.X series.
Which one should I use?
What are the major differences between the two?

-- 
Thanks and Regards,
Vimal Jain


Copying data from one Hbase cluster to Another Hbase cluster

2014-02-13 Thread Vimal Jain
Hi,
I have HBase and Hadoop set up in pseudo distributed mode in production.
Now I am planning to move from pseudo distributed mode to fully distributed
mode (a 2-node cluster).
My existing Hadoop and HBase versions are 1.1.2 and 0.94.7.
And I am planning to run the fully distributed mode with HBase version 0.94.16
and a Hadoop version yet to be decided (either 1.X or 2.X).

What are the different ways to copy data from the existing setup (pseudo
distributed mode) to this new setup (2-node fully distributed mode)?

Please help.
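Two approaches that are commonly used for this, sketched here with placeholder
table names, hosts and paths (both assume the two clusters can reach each
other):

  # Option 1: CopyTable - copy a table over the network to the new cluster.
  # --peer.adr is the destination ZooKeeper quorum, client port and znode parent.
  hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --peer.adr=new-zk-host:2181:/hbase my_table

  # Option 2: Export to HDFS files, move them with distcp, then Import.
  # (The table must already exist on the destination cluster.)
  hbase org.apache.hadoop.hbase.mapreduce.Export my_table /tmp/my_table_export
  hadoop distcp hdfs://old-nn:9000/tmp/my_table_export \
    hdfs://new-nn:9000/tmp/my_table_export
  hbase org.apache.hadoop.hbase.mapreduce.Import my_table /tmp/my_table_export

Export/Import also works across the 0.94.7 to 0.94.16 version gap, since the
data travels as sequence files of key-values rather than raw HFiles.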

-- 
Thanks and Regards,
Vimal Jain


Exception in data node log

2014-01-31 Thread Vimal Jain
Hi,
I have set up HBase in pseudo distributed mode.
I keep getting the below exceptions in the data node log.
Are they a problem?

( Hadoop version - 1.1.2 , Hbase version - 0.94.7 )

Please help.


java.net.SocketTimeoutException: 48 millis timeout while waiting for
channel to be ready for write. ch :
java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/
192.168.20.30:38188]
at
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
at
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
at
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
at java.lang.Thread.run(Thread.java:662)

2014-01-31 00:10:28,951 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
192.168.20.30:50010,
storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075,
ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 48 millis timeout while waiting for
channel to be ready for write. ch :
java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/
192.168.20.30:38188]
at
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
at
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
at
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
at java.lang.Thread.run(Thread.java:662)
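One thing often checked for this particular timeout, sketched here as a
suggestion rather than a definitive fix, is whether the reader on the other
end of the socket (apparently the region server on the same host, per the
addresses above) was stalled, and what write timeout the datanode is
configured with:

  # dfs.datanode.socket.write.timeout (hdfs-site.xml) controls how long the
  # datanode waits for the remote side to be ready; the default is 480000 ms.
  grep -B1 -A2 'dfs.datanode.socket.write.timeout' conf/hdfs-site.xml

  # Look for long GC pauses in the region server log around the same time.
  grep -i 'slept\|pause' logs/hbase-*-regionserver-*.log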

-- 
Thanks and Regards,
Vimal Jain


Re: High Full GC count for Region server

2013-10-31 Thread Vimal Jain
Hi,
Can anyone please reply to the above query ?


On Tue, Oct 29, 2013 at 10:48 AM, Vimal Jain vkj...@gmail.com wrote:

 Hi,
 Here is my analysis of this problem. Please correct me if I am wrong somewhere.
 I have assigned 2 GB to the region server process. I think it is sufficient
 to handle around 9 GB of data.
 I have not changed many of the parameters, especially the memstore size, which
 is 128 MB by default for 0.94.7.
 Also, as per my understanding, each column family has one memstore associated
 with it, so my memstores are taking 128*3 = 384 MB (I have 3 column
 families).
 So I think I should reduce the memstore size to something like 32/64 MB so
 that data is flushed to disk at a higher frequency than the current
 frequency. This will save some memory.
 Is there any other parameter, other than memstore size, that affects memory
 utilization?
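 A minimal sketch of the knobs involved, stated as an assumption against the
 0.94 defaults rather than a recommendation:

  # hbase-site.xml settings that bound memstore memory (0.94-era names):
  #   hbase.hregion.memstore.flush.size              - per-memstore flush
  #       threshold, default 134217728 (128 MB)
  #   hbase.regionserver.global.memstore.upperLimit  - fraction of the region
  #       server heap that all memstores together may use, default 0.4
  # Note: there is one memstore per column family *per region*, so the total
  # can be higher than flush.size x 3 if the tables have more than one region.
  grep -A2 'memstore' conf/hbase-site.xml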

 Also, I am getting the below exceptions in the data node log and region server
 log every day. Are they due to long GC pauses?

 Data node logs :-

 hadoop-hadoop-datanode-woody.log:2013-10-29 00:12:13,127 WARN
 org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
 192.168.20.30:5001
 0, storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
 infoPort=50075, ipcPort=50020):Got exception while serving
 blk_-560908881317618221_58058
  to /192.168.20.30:
 hadoop-hadoop-datanode-woody.log:java.net.SocketTimeoutException: 48
 millis timeout while waiting for channel to be ready for write. ch :
 java.nio
 .channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/
 192.168.20.30:39413]
 hadoop-hadoop-datanode-woody.log:2013-10-29 00:12:13,127 ERROR
 org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
 192.168.20.30:500

 10, storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
 infoPort=50075, ipcPort=50020):DataXceiver
 hadoop-hadoop-datanode-woody.log:java.net.SocketTimeoutException: 48
 millis timeout while waiting for channel to be ready for write. ch :
 java.nio
 .channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/
 192.168.20.30:39413]


 Region server logs :-

 hbase-hadoop-regionserver-woody.log:2013-10-29 01:01:16,475 WARN
 org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
 {processingtimems:15827,call
 :multi(org.apache.hadoop.hbase.client.MultiAction@2918e464), rpc
 version=1, client version=29,
 methodsFingerPrint=-1368823753,client:192.168.20.

 31:50619,starttimems:1382988660645,queuetimems:0,class:HRegionServer,responsesize:0,method:multi}
 hbase-hadoop-regionserver-woody.log:2013-10-29 06:01:27,459 WARN
 org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
 {processingtimems:14745,cli
 ent:192.168.20.31:50908
 ,timeRange:[0,9223372036854775807],starttimems:1383006672707,responsesize:55,class:HRegionServer,table:event_da

 ta,cacheBlocks:true,families:{oinfo:[clubStatus]},row:1752869,queuetimems:1,method:get,totalColumns:1,maxVersions:1}





 On Mon, Oct 28, 2013 at 11:55 PM, Asaf Mesika asaf.mes...@gmail.com wrote:

 Check through the HDFS UI that your cluster hasn't reached maximum disk
 capacity

 On Thursday, October 24, 2013, Vimal Jain wrote:

  Hi Ted/Jean,
  Can you please help here ?
 
 
  On Tue, Oct 22, 2013 at 10:29 PM, Vimal Jain vkj...@gmail.com
 javascript:;
  wrote:
 
   Hi Ted,
   Yes i checked namenode and datanode logs and i found below exceptions
 in
   both the logs:-
  
   Name node :-
   java.io.IOException: File
  
 
 /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
   could only be replicated to 0 nodes, instead of 1
  
   java.io.IOException: Got blockReceived message from unregistered or
 dead
   node blk_-2949905629769882833_52274
  
   Data node :-
   48 millis timeout while waiting for channel to be ready for
 write. ch
   : java.nio.channels.SocketChannel[connected local=/
 192.168.20.30:50010
remote=/192.168.20.30:36188]
  
   ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
   DatanodeRegistration(192.168.20.30:50010,
   storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
  infoPort=50075,
   ipcPort=50020):DataXceiver
  
   java.io.EOFException: while trying to read 39309 bytes
  
  
   On Tue, Oct 22, 2013 at 10:19 PM, Ted Yu yuzhih...@gmail.com wrote:
  
   bq. java.io.IOException: File /hbase/event_data/
  
 4c3765c51911d6c67037a983d205a010/.tmp/bfaf8df33d5b4068825e3664d3e4b2b0
   could
   only be replicated to 0 nodes, instead of 1
  
   Have you checked Namenode / Datanode logs ?
   Looks like hdfs was not stable.
  
  
   On Tue, Oct 22, 2013 at 9:01 AM, Vimal Jain vkj...@gmail.com
 wrote:
  
HI Jean,
Thanks for your reply.
I have total 8 GB memory and distribution is as follows:-
   
Region server  - 2 GB
Master,Namenode,Datanode,Secondary Namenode,Zookepeer - 1 GB
OS - 1 GB
   
Please let me know if you need more information.
   
   
On Tue, Oct 22, 2013 at 8:15 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:
   
 Hi Vimal,

 What are your

Exceptions in Data node log

2013-10-24 Thread Vimal Jain
Hi,
I am using Hadoop and HBase in pseudo distributed mode.
I am using Hadoop version 1.1.2 and HBase version 0.94.7.

I am receiving the following error messages in the data node log.

hadoop-hadoop-datanode-woody.log:2013-10-24 10:55:37,579 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
192.168.20.30:5001
0, storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
infoPort=50075, ipcPort=50020):Got exception while serving
blk_4378636005274237256_55385
 to /192.168.20.30:
hadoop-hadoop-datanode-woody.log:java.net.SocketTimeoutException: 48
millis timeout while waiting for channel to be ready for write. ch :
java.nio
.channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/
192.168.20.30:60739]
hadoop-hadoop-datanode-woody.log:2013-10-24 10:55:37,603 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
192.168.20.30:500
10, storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
infoPort=50075, ipcPort=50020):DataXceiver
hadoop-hadoop-datanode-woody.log:java.net.SocketTimeoutException: 48
millis timeout while waiting for channel to be ready for write. ch :
java.nio
.channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/
192.168.20.30:60739]


Please help in understanding the cause behind this.
-- 
Thanks and Regards,
Vimal Jain


Re: High Full GC count for Region server

2013-10-23 Thread Vimal Jain
Hi Ted/Jean,
Can you please help here ?


On Tue, Oct 22, 2013 at 10:29 PM, Vimal Jain vkj...@gmail.com wrote:

 Hi Ted,
 Yes i checked namenode and datanode logs and i found below exceptions in
 both the logs:-

 Name node :-
 java.io.IOException: File
 /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
 could only be replicated to 0 nodes, instead of 1

 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-2949905629769882833_52274

 Data node :-
 48 millis timeout while waiting for channel to be ready for write. ch
 : java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010
  remote=/192.168.20.30:36188]

 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
 DatanodeRegistration(192.168.20.30:50010,
 storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075,
 ipcPort=50020):DataXceiver

 java.io.EOFException: while trying to read 39309 bytes


 On Tue, Oct 22, 2013 at 10:19 PM, Ted Yu yuzhih...@gmail.com wrote:

 bq. java.io.IOException: File /hbase/event_data/
 4c3765c51911d6c67037a983d205a010/.tmp/bfaf8df33d5b4068825e3664d3e4b2b0
 could
 only be replicated to 0 nodes, instead of 1

 Have you checked Namenode / Datanode logs ?
 Looks like hdfs was not stable.


 On Tue, Oct 22, 2013 at 9:01 AM, Vimal Jain vkj...@gmail.com wrote:

  HI Jean,
  Thanks for your reply.
  I have total 8 GB memory and distribution is as follows:-
 
  Region server  - 2 GB
  Master,Namenode,Datanode,Secondary Namenode,Zookepeer - 1 GB
  OS - 1 GB
 
  Please let me know if you need more information.
 
 
  On Tue, Oct 22, 2013 at 8:15 PM, Jean-Marc Spaggiari 
  jean-m...@spaggiari.org wrote:
 
   Hi Vimal,
  
   What are your settings? Memory of the host, and memory allocated for
 the
   different HBase services?
  
   Thanks,
  
   JM
  
  
   2013/10/22 Vimal Jain vkj...@gmail.com
  
Hi,
I am running in Hbase in pseudo distributed mode. ( Hadoop version -
   1.1.2
, Hbase version - 0.94.7 )
I am getting few exceptions in both hadoop ( namenode , datanode)
 logs
   and
hbase(region server).
When i search for these exceptions on google , i concluded  that
  problem
   is
mainly due to large number of full GC in region server process.
   
I used jstat and found that there are total of 950 full GCs in span
 of
  4
days for region server process.Is this ok?
   
I am totally confused by number of exceptions i am getting.
Also i get below exceptions intermittently.
   
   
Region server:-
   
2013-10-22 12:00:26,627 WARN org.apache.hadoop.ipc.HBaseServer:
(responseTooSlow):
{processingtimems:15312,call:next(-6681408251916104762, 1000),
 rpc
version=1, client version=29,
  methodsFingerPrint=-1368823753,client:
192.168.20.31:48270
   
   
  
 
 ,starttimems:1382423411293,queuetimems:0,class:HRegionServer,responsesize:4808556,method:next}
2013-10-22 12:06:17,606 WARN org.apache.hadoop.ipc.HBaseServer:
(operationTooSlow): {processingtimems:14759,client:
192.168.20.31:48247
   
   
  
 
 ,timeRange:[0,9223372036854775807],starttimems:1382423762845,responsesize:61,class:HRegionServer,table:event_data,cacheBlocks:true,families:{ginfo:[netGainPool]},row:1629657,queuetimems:0,method:get,totalColumns:1,maxVersions:1}
   
2013-10-18 10:37:45,008 WARN org.apache.hadoop.hdfs.DFSClient:
   DataStreamer
Exception: org.apache.hadoop.ipc.RemoteException:
 java.io.IOException:
   File
   
   
  
 
 /hbase/event_data/4c3765c51911d6c67037a983d205a010/.tmp/bfaf8df33d5b4068825e3664d3e4b2b0
could only be replicated to 0 nodes, instead of 1
at
   
   
  
 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
   
Name node :-
java.io.IOException: File
   
   
  
 
 /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
could only be replicated to 0 nodes, instead of 1
   
java.io.IOException: Got blockReceived message from unregistered or
  dead
node blk_-2949905629769882833_52274
   
Data node :-
48 millis timeout while waiting for channel to be ready for
 write.
   ch :
java.nio.channels.SocketChannel[connected local=/
 192.168.20.30:50010
remote=/
192.168.20.30:36188]
   
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(
192.168.20.30:50010,
storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
   infoPort=50075,
ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 39309 bytes
   
   
--
Thanks and Regards,
Vimal Jain
   
  
 
 
 
  --
  Thanks and Regards,
  Vimal Jain
 




 --
 Thanks and Regards,
 Vimal Jain




-- 
Thanks and Regards,
Vimal Jain


High Full GC count for Region server

2013-10-22 Thread Vimal Jain
Hi,
I am running HBase in pseudo distributed mode (Hadoop version 1.1.2,
HBase version 0.94.7).
I am getting a few exceptions in both the Hadoop (namenode, datanode) logs and
the HBase (region server) log.
When I searched for these exceptions on Google, I concluded that the problem is
mainly due to a large number of full GCs in the region server process.

I used jstat and found that there are a total of 950 full GCs in a span of 4
days for the region server process. Is this OK?
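For reference, a minimal sketch of how such numbers are usually gathered and
how GC logging can be enabled for the region server (the PID lookup and the
log path are assumptions):

  # Sample GC counters every 5 seconds; FGC is the cumulative full-GC count
  # and FGCT the total time spent in full GCs.
  RS_PID=$(jps | awk '/HRegionServer/ {print $1}')
  jstat -gcutil "$RS_PID" 5000

  # Enable GC logging in hbase-env.sh so long pauses can be matched against
  # the responseTooSlow and timeout messages below.
  export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
    -XX:+PrintGCTimeStamps -Xloggc:/var/log/hbase/gc-regionserver.log"

950 full GCs over 4 days on a 2 GB heap works out to roughly one full GC every
6 minutes, which most operators would consider high for a region server.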

I am totally confused by the number of exceptions I am getting.
Also, I get the below exceptions intermittently.


Region server:-

2013-10-22 12:00:26,627 WARN org.apache.hadoop.ipc.HBaseServer:
(responseTooSlow):
{processingtimems:15312,call:next(-6681408251916104762, 1000), rpc
version=1, client version=29, methodsFingerPrint=-1368823753,client:
192.168.20.31:48270
,starttimems:1382423411293,queuetimems:0,class:HRegionServer,responsesize:4808556,method:next}
2013-10-22 12:06:17,606 WARN org.apache.hadoop.ipc.HBaseServer:
(operationTooSlow): {processingtimems:14759,client:192.168.20.31:48247
,timeRange:[0,9223372036854775807],starttimems:1382423762845,responsesize:61,class:HRegionServer,table:event_data,cacheBlocks:true,families:{ginfo:[netGainPool]},row:1629657,queuetimems:0,method:get,totalColumns:1,maxVersions:1}

2013-10-18 10:37:45,008 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/hbase/event_data/4c3765c51911d6c67037a983d205a010/.tmp/bfaf8df33d5b4068825e3664d3e4b2b0
could only be replicated to 0 nodes, instead of 1
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)

Name node :-
java.io.IOException: File
/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
could only be replicated to 0 nodes, instead of 1

java.io.IOException: Got blockReceived message from unregistered or dead
node blk_-2949905629769882833_52274

Data node :-
48 millis timeout while waiting for channel to be ready for write. ch :
java.nio.channels.SocketChannel[connected local=/192.168.20.30:50010 remote=/
192.168.20.30:36188]

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
192.168.20.30:50010,
storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075,
ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 39309 bytes


-- 
Thanks and Regards,
Vimal Jain


Re: Exceptions in Hadoop and Hbase log files

2013-10-20 Thread Vimal Jain
I will try that if I get them next time.
Could anyone please explain the cause of these exceptions?


On Fri, Oct 18, 2013 at 4:03 PM, divye sheth divs.sh...@gmail.com wrote:

 I would recommend you to stop the cluster and then start the daemons one by
 one.
 1. stop-dfs.sh
 2. hadoop-daemon.sh start namenode
 3. hadoop-daemon.sh start datanode

 This will show start-up errors, if any; also verify that the datanode is able
 to communicate with the namenode.

 Thanks
 Divye Sheth


 On Fri, Oct 18, 2013 at 3:51 PM, Vimal Jain vkj...@gmail.com wrote:

  Hi,
  I am running Hbase in pseudo distributed mode.( Hbase 0.94.7 and Hadoop
  1.1.2).
  I am getting certain exceptions in Hadoop's namenode and data node files
  which are :-
 
  Namenode :-
 
  2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
  NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
  2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
  Removing a node: /default-rack/192.168.20.30:50010
  2013-10-18 10:35:27,606 INFO
  org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
  transactions: 64 Total time for transactions(ms): 1Number
  of transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
  2013-10-18 10:35:27,614 ERROR
  org.apache.hadoop.security.UserGroupInformation:
 PriviledgedActionException
  as:hadoop cause:java.io.IOException: File /h
 
 
 base/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
  could only be replicated to 0 nodes, instead of 1
  2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server
  handler 9 on 9000, call
  addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a
  3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e,
  DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from
  192.168.20.30:44990: error: java.io.I
  OException: File
 
 
 /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
  could only be replicated to 0 nodes, instead
   of 1
  java.io.IOException: File
 
 
 /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
  could only be replicated to 0 nodes
  , instead of 1
  at
 
 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
  at
 
 org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
  at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
  at
 
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at
 
 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
 
 
  Data node :-
 
  2013-10-18 06:13:14,499 WARN
  org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
  192.168.20.30:50010, storageID=DS-1816106352-192.16
  8.20.30-50010-1369314076237, infoPort=50075, ipcPort=50020):Got exception
  while serving blk_-3215981820534544354_52215 to /192.168.20.30:
  java.net.SocketTimeoutException: 48 millis timeout while waiting for
  channel to be ready for write. ch :
  java.nio.channels.SocketChannel[connected
   local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
  at
 
 
 org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
  at
 
 
 org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
  at
 
 
 org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
  at
 
 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
  at
 
 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
  at
 
 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
  at
 
 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
  at java.lang.Thread.run(Thread.java:662)
 
 
 
 
 
 
 
  --
  Thanks and Regards,
  Vimal Jain
 




-- 
Thanks and Regards,
Vimal Jain


Exceptions in Hadoop and Hbase log files

2013-10-18 Thread Vimal Jain
Hi,
I am running HBase in pseudo distributed mode (HBase 0.94.7 and Hadoop
1.1.2).
I am getting certain exceptions in Hadoop's namenode and data node log files,
which are :-

Namenode :-

2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
Removing a node: /default-rack/192.168.20.30:50010
2013-10-18 10:35:27,606 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
transactions: 64 Total time for transactions(ms): 1Number
of transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
2013-10-18 10:35:27,614 ERROR
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
as:hadoop cause:java.io.IOException: File /h
base/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
could only be replicated to 0 nodes, instead of 1
2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 9000, call
addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a
3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e,
DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from
192.168.20.30:44990: error: java.io.I
OException: File
/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
could only be replicated to 0 nodes, instead
 of 1
java.io.IOException: File
/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
could only be replicated to 0 nodes
, instead of 1
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)


Data node :-

2013-10-18 06:13:14,499 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
192.168.20.30:50010, storageID=DS-1816106352-192.16
8.20.30-50010-1369314076237, infoPort=50075, ipcPort=50020):Got exception
while serving blk_-3215981820534544354_52215 to /192.168.20.30:
java.net.SocketTimeoutException: 48 millis timeout while waiting for
channel to be ready for write. ch :
java.nio.channels.SocketChannel[connected
 local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
at
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
at
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
at
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
at java.lang.Thread.run(Thread.java:662)







-- 
Thanks and Regards,
Vimal Jain


Re: Exceptions in Hadoop and Hbase log files

2013-10-18 Thread Vimal Jain
Some more exceptions in the data node log :-

2013-10-18 10:37:53,693 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockRec
eived message from unregistered or dead node blk_-2949905629769882833_52274
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at sun.proxy.$Proxy5.blockReceived(Unknown Source)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
at java.lang.Thread.run(Thread.java:662)

2013-10-18 10:37:53,696 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockRec
eived message from unregistered or dead node blk_-2949905629769882833_52274
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)

These exceptions keep filling up my disk space.
Let me know if you need more information.
Please help here.


On Fri, Oct 18, 2013 at 3:51 PM, Vimal Jain vkj...@gmail.com wrote:

 Hi,
 I am running Hbase in pseudo distributed mode.( Hbase 0.94.7 and Hadoop
 1.1.2).
 I am getting certain exceptions in Hadoop's namenode and data node files
 which are :-

 Namenode :-

 2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
 NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
 2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
 Removing a node: /default-rack/192.168.20.30:50010
 2013-10-18 10:35:27,606 INFO
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
 transactions: 64 Total time for transactions(ms): 1Number
 of transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
 2013-10-18 10:35:27,614 ERROR
 org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
 as:hadoop cause:java.io.IOException: File /h
 base/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
 could only be replicated to 0 nodes, instead of 1
 2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 9 on 9000, call
 addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a
 3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e,
 DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from
 192.168.20.30:44990: error: java.io.I
 OException: File
 /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
 could only be replicated to 0 nodes, instead
  of 1
 java.io.IOException: File
 /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
 could only be replicated to 0 nodes
 , instead of 1
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
 at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)


 Data node :-

 2013-10-18 06:13:14,499 WARN
 org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
 192.168.20.30:50010

Re: Exceptions in Name node and Data node logs

2013-08-15 Thread Vimal Jain
Can someone please help here ?


On Tue, Aug 13, 2013 at 9:28 PM, Vimal Jain vkj...@gmail.com wrote:

 Hi,
 As Jitendra pointed out , this issue was fixed in .20 version.
 I am using Hadoop 1.1.2 so why its occurring again ?
 Please help here.


 On Tue, Aug 13, 2013 at 2:56 PM, Vimal Jain vkj...@gmail.com wrote:

 Hi Jitendra,
 Thanks for your reply.
 Currently my hadoop/hbase is down in production as it had filled up the
 disk space with above exceptions in log files  and had to be brought down.
 Also i am using hadoop/hbase  in pseudo distributed mode , so there is
 only one node which hosts all 6 processes ( 3 from hadoop and 3 from hbase).



 On Tue, Aug 13, 2013 at 2:50 PM, Jitendra Yadav 
 jeetuyadav200...@gmail.com wrote:

 Hi,

 One of your DN is marked as dead because NN is not able to get heartbeat
 message from DN but NN still getting block information from dead node. This
 error is similar to a bug *HDFS-1250* reported 2 years back and fixed
 in 0.20 release.

 Can you please check the status of DN's in cluster.

 #bin/hadoop dfsadmin -report

 Thanks

 On Tue, Aug 13, 2013 at 1:53 PM, Vimal Jain vkj...@gmail.com wrote:

   Hi,
 I have configured Hadoop and Hbase in pseudo distributed mode.
 So far things were working fine , but suddenly i started receiving some
 exceptions in my namenode and datanode log files.
 It keeps repeating and thus fills up my disk space.
  Please help here.

 *Exception in data node :-*

 2013-07-31 19:39:51,094 WARN
 org.apache.hadoop.hdfs.server.datanode.DataNode:
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockRec
 eived message from unregistered or dead node
 blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

 at org.apache.hadoop.ipc.Client.call(Client.java:1107)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
 at sun.proxy.$Proxy5.blockReceived(Unknown Source)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
 at java.lang.Thread.run(Thread.java:662)

 *Exception in name node :- *

 2013-07-31 19:39:50,671 WARN org.apache.hadoop.hdfs.StateChange: BLOCK*
 NameSystem.blockReceived: blk_-4787262105551508952_28369 is received from
 dead
  or unregistered node 192.168.20.30:50010
 2013-07-31 19:39:50,671 ERROR
 org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
 as:hadoop cause:java.io.IOException: Got blo
 ckReceived message from unregistered or dead node
 blk_-4787262105551508952_28369
 2013-07-31 19:39:50,671 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 5 on 9000, call blockReceived(DatanodeRegistration(
 192.168.20.30:50010,
 storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
 infoPort=50075, ipcPort=50020),
 [Lorg.apache.hadoop.hdfs.protocol.Block;@64f2d559, [Ljava.l
 ang.String;@294f9d6) from 192.168.20.30:59764: error:
 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-4787262105551
 508952_28369
 java.io.IOException: Got blockReceived message from unregistered or
 dead node blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)


 --
 Thanks and Regards,
 Vimal Jain





 --
 Thanks and Regards

Re: Exceptions in Name node and Data node logs

2013-08-13 Thread Vimal Jain
Sorry for not giving version details
I am using Hadoop version - 1.1.2  and Hbase version - 0.94.7


On Tue, Aug 13, 2013 at 1:53 PM, Vimal Jain vkj...@gmail.com wrote:

 Hi,
 I have configured Hadoop and Hbase in pseudo distributed mode.
 So far things were working fine , but suddenly i started receiving some
 exceptions in my namenode and datanode log files.
 It keeps repeating and thus fills up my disk space.
 Please help here.

 *Exception in data node :-*

 2013-07-31 19:39:51,094 WARN
 org.apache.hadoop.hdfs.server.datanode.DataNode:
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockRec
 eived message from unregistered or dead node blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

 at org.apache.hadoop.ipc.Client.call(Client.java:1107)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
 at sun.proxy.$Proxy5.blockReceived(Unknown Source)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
 at java.lang.Thread.run(Thread.java:662)

 *Exception in name node :- *

 2013-07-31 19:39:50,671 WARN org.apache.hadoop.hdfs.StateChange: BLOCK*
 NameSystem.blockReceived: blk_-4787262105551508952_28369 is received from
 dead
  or unregistered node 192.168.20.30:50010
 2013-07-31 19:39:50,671 ERROR
 org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
 as:hadoop cause:java.io.IOException: Got blo
 ckReceived message from unregistered or dead node
 blk_-4787262105551508952_28369
 2013-07-31 19:39:50,671 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 5 on 9000, call blockReceived(DatanodeRegistration(
 192.168.20.30:50010,
 storageID=DS-1816106352-192.168.20.30-50010-1369314076237, infoPort=50075,
 ipcPort=50020), [Lorg.apache.hadoop.hdfs.protocol.Block;@64f2d559,
 [Ljava.l
 ang.String;@294f9d6) from 192.168.20.30:59764: error:
 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-4787262105551
 508952_28369
 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)


 --
 Thanks and Regards,
 Vimal Jain




-- 
Thanks and Regards,
Vimal Jain


Re: Exceptions in Name node and Data node logs

2013-08-13 Thread Vimal Jain
 org.apache.hadoop.hbase.util.Sleeper: We slept
78562ms instead of 6ms, this is likely due to a long garbage collecting
pause and it's usually bad, see
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired


*Exception in Region log :*
java.io.IOException: Reflection
at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:304)
at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1375)
at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1319)
at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1480)
at
org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1271)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor156.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:302)
... 5 more
Caused by: java.io.IOException: DFSOutputStream is closed
at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.sync(DFSClient.java:3669)
at
org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:97)
at
org.apache.hadoop.io.SequenceFile$Writer.syncFs(SequenceFile.java:995)
... 9 more
2013-07-31 15:50:37,761 FATAL
org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync. Requesting
close of hlog

2013-07-31 19:23:38,289 INFO org.apache.hadoop.hdfs.DFSClient: Could not
complete file 
/hbase/.logs/hbase.xyz.com,60020,1370497170634/hbase.xyz.com%2C60020%2C1370497170634.1375265949987
retrying...
2013-07-31 19:23:38,289 INFO org.apache.hadoop.hdfs.DFSClient: Could not
complete file 
/hbase/.logs/hbase.xyz.com,60020,1370497170634/hbase.xyz.com%2C60020%2C1370497170634.1375265949987
retrying...



On Tue, Aug 13, 2013 at 1:56 PM, Vimal Jain vkj...@gmail.com wrote:

 Sorry for not giving version details
 I am using Hadoop version - 1.1.2  and Hbase version - 0.94.7


 On Tue, Aug 13, 2013 at 1:53 PM, Vimal Jain vkj...@gmail.com wrote:

 Hi,
 I have configured Hadoop and Hbase in pseudo distributed mode.
 So far things were working fine , but suddenly i started receiving some
 exceptions in my namenode and datanode log files.
 It keeps repeating and thus fills up my disk space.
 Please help here.

 *Exception in data node :-*

 2013-07-31 19:39:51,094 WARN
 org.apache.hadoop.hdfs.server.datanode.DataNode:
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockRec
 eived message from unregistered or dead node
 blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

 at org.apache.hadoop.ipc.Client.call(Client.java:1107)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
 at sun.proxy.$Proxy5.blockReceived(Unknown Source)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
 at java.lang.Thread.run(Thread.java:662)

 *Exception in name node :- *

 2013-07-31 19:39:50,671 WARN org.apache.hadoop.hdfs.StateChange: BLOCK*
 NameSystem.blockReceived: blk_-4787262105551508952_28369 is received from
 dead
  or unregistered node 192.168.20.30:50010
 2013-07-31 19:39:50,671 ERROR
 org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
 as:hadoop cause:java.io.IOException: Got blo
 ckReceived message from unregistered or dead node
 blk_-4787262105551508952_28369
 2013-07-31 19:39:50,671 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 5 on 9000, call blockReceived(DatanodeRegistration(
 192.168.20.30:50010,
 storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
 infoPort=50075, ipcPort=50020),
 [Lorg.apache.hadoop.hdfs.protocol.Block;@64f2d559, [Ljava.l
 ang.String;@294f9d6) from 192.168.20.30:59764: error:
 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-4787262105551
 508952_28369
 java.io.IOException: Got

Re: Exceptions in Name node and Data node logs

2013-08-13 Thread Vimal Jain
Hi Jitendra,
Thanks for your reply.
Currently my hadoop/hbase is down in production as it had filled up the
disk space with above exceptions in log files  and had to be brought down.
Also i am using hadoop/hbase  in pseudo distributed mode , so there is only
one node which hosts all 6 processes ( 3 from hadoop and 3 from hbase).



On Tue, Aug 13, 2013 at 2:50 PM, Jitendra Yadav
jeetuyadav200...@gmail.comwrote:

 Hi,

 One of your DN is marked as dead because NN is not able to get heartbeat
 message from DN but NN still getting block information from dead node. This
 error is similar to a bug *HDFS-1250* reported 2 years back and fixed in
 0.20 release.

 Can you please check the status of DN's in cluster.

 #bin/hadoop dfsadmin -report

 Thanks

 On Tue, Aug 13, 2013 at 1:53 PM, Vimal Jain vkj...@gmail.com wrote:

   Hi,
 I have configured Hadoop and Hbase in pseudo distributed mode.
 So far things were working fine , but suddenly i started receiving some
 exceptions in my namenode and datanode log files.
 It keeps repeating and thus fills up my disk space.
  Please help here.

 *Exception in data node :-*

 2013-07-31 19:39:51,094 WARN
 org.apache.hadoop.hdfs.server.datanode.DataNode:
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockRec
 eived message from unregistered or dead node
 blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

 at org.apache.hadoop.ipc.Client.call(Client.java:1107)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
 at sun.proxy.$Proxy5.blockReceived(Unknown Source)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
 at java.lang.Thread.run(Thread.java:662)

 *Exception in name node :- *

 2013-07-31 19:39:50,671 WARN org.apache.hadoop.hdfs.StateChange: BLOCK*
 NameSystem.blockReceived: blk_-4787262105551508952_28369 is received from
 dead
  or unregistered node 192.168.20.30:50010
 2013-07-31 19:39:50,671 ERROR
 org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
 as:hadoop cause:java.io.IOException: Got blo
 ckReceived message from unregistered or dead node
 blk_-4787262105551508952_28369
 2013-07-31 19:39:50,671 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 5 on 9000, call blockReceived(DatanodeRegistration(
 192.168.20.30:50010,
 storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
 infoPort=50075, ipcPort=50020),
 [Lorg.apache.hadoop.hdfs.protocol.Block;@64f2d559, [Ljava.l
 ang.String;@294f9d6) from 192.168.20.30:59764: error:
 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-4787262105551
 508952_28369
 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)


 --
 Thanks and Regards,
 Vimal Jain





-- 
Thanks and Regards,
Vimal Jain


Re: Exceptions in Name node and Data node logs

2013-08-13 Thread Vimal Jain
Hi,
As Jitendra pointed out, this issue was fixed in the 0.20 release.
I am using Hadoop 1.1.2, so why is it occurring again?
Please help here.


On Tue, Aug 13, 2013 at 2:56 PM, Vimal Jain vkj...@gmail.com wrote:

 Hi Jitendra,
 Thanks for your reply.
 Currently my hadoop/hbase is down in production as it had filled up the
 disk space with above exceptions in log files  and had to be brought down.
 Also i am using hadoop/hbase  in pseudo distributed mode , so there is
 only one node which hosts all 6 processes ( 3 from hadoop and 3 from hbase).



 On Tue, Aug 13, 2013 at 2:50 PM, Jitendra Yadav 
 jeetuyadav200...@gmail.com wrote:

 Hi,

 One of your DN is marked as dead because NN is not able to get heartbeat
 message from DN but NN still getting block information from dead node. This
 error is similar to a bug *HDFS-1250* reported 2 years back and fixed in
 0.20 release.

 Can you please check the status of DN's in cluster.

 #bin/hadoop dfsadmin -report

 Thanks

 On Tue, Aug 13, 2013 at 1:53 PM, Vimal Jain vkj...@gmail.com wrote:

   Hi,
 I have configured Hadoop and Hbase in pseudo distributed mode.
 So far things were working fine , but suddenly i started receiving some
 exceptions in my namenode and datanode log files.
 It keeps repeating and thus fills up my disk space.
  Please help here.

 *Exception in data node :-*

 2013-07-31 19:39:51,094 WARN
 org.apache.hadoop.hdfs.server.datanode.DataNode:
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockRec
 eived message from unregistered or dead node
 blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

 at org.apache.hadoop.ipc.Client.call(Client.java:1107)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
 at sun.proxy.$Proxy5.blockReceived(Unknown Source)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
 at java.lang.Thread.run(Thread.java:662)

 *Exception in name node :- *

 2013-07-31 19:39:50,671 WARN org.apache.hadoop.hdfs.StateChange: BLOCK*
 NameSystem.blockReceived: blk_-4787262105551508952_28369 is received from
 dead
  or unregistered node 192.168.20.30:50010
 2013-07-31 19:39:50,671 ERROR
 org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
 as:hadoop cause:java.io.IOException: Got blo
 ckReceived message from unregistered or dead node
 blk_-4787262105551508952_28369
 2013-07-31 19:39:50,671 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 5 on 9000, call blockReceived(DatanodeRegistration(
 192.168.20.30:50010,
 storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
 infoPort=50075, ipcPort=50020),
 [Lorg.apache.hadoop.hdfs.protocol.Block;@64f2d559, [Ljava.l
 ang.String;@294f9d6) from 192.168.20.30:59764: error:
 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-4787262105551
 508952_28369
 java.io.IOException: Got blockReceived message from unregistered or dead
 node blk_-4787262105551508952_28369
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)


 --
 Thanks and Regards,
 Vimal Jain





 --
 Thanks and Regards,
 Vimal Jain




-- 
Thanks and Regards,
Vimal Jain


Get Hadoop update

2013-05-22 Thread Vimal Jain
Hi,
I would like to receive Hadoop notifications.

-- 
Thanks and Regards,
Vimal Jain