Any ideas? Anyone?
On Wed, Aug 28, 2013 at 9:36 AM, Ameya Kanitkar wrote:
Thanks for your response.
I checked namenode logs and I find following:
2013-08-28 15:25:24,025 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: recover
lease [Lease. Holder:
DFSClient_hb_rs_smartdeals-hbase14-snc1.snc1,60020,1377700014053_-346895658_25,
pendingcreates: 1]
From the log you posted on pastebin, I see the following.
Can you check the namenode log to see what went wrong?
1. Caused by:
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
/hbase/.logs/smartdeals-hbase14-snc1.snc1,60020,1376944419197/smartdeals-hbase14-snc1.sn
ver has already
expired. Setting your setCaching value lower might help in this case.
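To make the suggestion concrete, here is a minimal sketch of lowering scan caching on the client side. The class name and the caching value are illustrative assumptions, not from this thread; Scan.setCaching and Scan.setCacheBlocks are standard HBase client API.

```java
import org.apache.hadoop.hbase.client.Scan;

public class ScanCachingExample {
    public static Scan buildScan() {
        Scan scan = new Scan();
        // With a large caching value, each next() batch can take longer to
        // process than the region server lease period, so the scanner lease
        // expires mid-scan. A smaller value trades more RPCs for shorter
        // per-call processing time between lease renewals.
        scan.setCaching(100);         // illustrative; lower than typical bulk values
        scan.setCacheBlocks(false);   // full scans should not pollute the block cache
        return scan;
    }
}
```

Running this requires the HBase client jars on the classpath; the right caching value depends on row size and per-row processing cost in the mapper.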
Regards,
Dhaval
From: Ameya Kanitkar
To: user@hbase.apache.org
Sent: Wednesday, 28 August 2013 11:00 AM
Subject: Lease Exception Errors When Running Heavy Map Reduce Job
Hi All,
We have a very heavy map reduce job that goes over an entire HBase table
with over 1 TB of data and exports all of it (similar to the Export job,
but with some additional custom code built in) to HDFS.
However, this job is not very stable; oftentimes we get the following
error and the job fails:
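For context, an Export-style full-table scan job of this shape is typically wired up with TableMapReduceUtil. The sketch below uses IdentityTableMapper as a stand-in for the custom code mentioned above; the table name, output path, and caching value are placeholders I am assuming, not details from this thread.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CustomExport {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "custom-export");
        job.setJarByClass(CustomExport.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // rows fetched per RPC; illustrative value
        scan.setCacheBlocks(false);  // recommended for full-table MR scans

        // IdentityTableMapper passes each Result through unchanged; a real
        // job would substitute the custom mapper described in the thread.
        TableMapReduceUtil.initTableMapperJob(
                "my_table",                 // placeholder table name
                scan,
                IdentityTableMapper.class,
                ImmutableBytesWritable.class,
                Result.class,
                job);

        job.setNumReduceTasks(0);    // map-only export, like the stock Export job
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        SequenceFileOutputFormat.setOutputPath(job, new Path("/tmp/export-out"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

This needs the HBase and Hadoop MapReduce jars on the classpath and a running cluster to execute, so it is a shape sketch rather than something runnable in isolation.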
org.