Re: thrift.ProcessFunction: Internal error processing get

2015-08-25 Thread Chandrashekhar Kotekar
Issue resolved. The startup script shipped with CDH 5.3 starts the Thrift service,
but the hbase.thrift file in the source code is for Thrift2, so we need to stop
the Thrift server and start the Thrift2 server instead. After starting the
Thrift2 server my problem was resolved.


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455






Re: Thrift node.js code not working

2015-08-24 Thread Chandrashekhar Kotekar
Figured it out. I was passing the wrong JavaScript object to that method, but
now I am getting an 'ERROR thrift.ProcessFunction: Internal error processing get'
error in the Thrift server logs. Shall I start a separate thread about this error?


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455

On Mon, Aug 24, 2015 at 6:45 PM, Ted Yu yuzhih...@gmail.com wrote:

 Looking at pom.xml in 0.98 branch I see:
   <thrift.version>0.9.0</thrift.version>

 Not sure which thrift version is used in CDH.

 BTW, Thrift2 is supported in 0.98.
 Take a look at the hbase-thrift module and its git log.

 Cheers




Re: Thrift node.js code not working

2015-08-24 Thread Chandrashekhar Kotekar
I am using HBase 0.98.6, which is shipped with CDH 5.3.0; the Thrift compiler
version is 0.9.2, and I believe I have started the HBase Thrift server. I am
not sure whether Thrift2 is available with 0.98.6, and even if it is available
I am not sure how to start the Thrift2 service.


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455




Re: Thrift node.js code not working

2015-08-24 Thread Chandrashekhar Kotekar
Okay, thanks.


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455

On Mon, Aug 24, 2015 at 7:43 PM, Ted Yu yuzhih...@gmail.com wrote:

 A separate thread would be good, with a pastebin of the relevant error /
 exception.

 Please check the region server log as well.






thrift.ProcessFunction: Internal error processing get

2015-08-24 Thread Chandrashekhar Kotekar
Hi,

I have generated Node.js files using Thrift and am trying to get a single row
from HBase. I get a 'thrift.ProcessFunction: Internal error processing get'
error (http://pastebin.com/embed.php?i=r9uqr8iN) when I execute the Node.js
code. When I try to put a dummy column into an existing row, I get this error
from Node.js instead (http://pastebin.com/raw.php?i=kjXtvxjV). There's no error
on the Thrift server side for the 'put' operation.

Can anyone please help?

Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Thrift node.js code not working

2015-08-24 Thread Chandrashekhar Kotekar
Hi,

I am trying to use the following code to test the HBase Thrift interface from
Node.js, but it is not working.

var thrift = require('thrift');
var hbase = require('./gen-nodejs/THBaseService');
var hbaseTypes = require('./gen-nodejs/hbase_types');

var connection = thrift.createConnection('nn2', 9090, {
  transport: thrift.TBufferedTransport//,
  //protocol : thrift.TBinaryProtocol
});
console.log('connection : ' + connection);

var client = thrift.createClient(hbase, connection);
for (a in client) {
  console.log(a);
}

connection.on('connect', function(){
  console.log('connected to hbase.');
  client.get('AD_COMPANY_V1', '028fffac57101a1fa5f9aa53a6d0', 'CF:Id',
      null, function(err, data){
    console.log(data);
  });
  connection.end();
});

connection.on('error', function(err){
  console.log('error while connecting : ', err);
});


Whenever I execute this code using the 'node index.js' command I get the
following error:

/home/ubuntu/shekhar/thrift/client/gen-nodejs/THBaseService.js:228
this.get.write(output);
 ^
TypeError: undefined is not a function
at Object.THBaseService_get_args.write
(/home/ubuntu/shekhar/thrift/client/gen-nodejs/THBaseService.js:228:14)
at Object.THBaseServiceClient.send_get
(/home/ubuntu/shekhar/thrift/client/gen-nodejs/THBaseService.js:2652:8)
at Object.THBaseServiceClient.get
(/home/ubuntu/shekhar/thrift/client/gen-nodejs/THBaseService.js:2642:10)
at null.anonymous (/home/ubuntu/shekhar/thrift/client/index.js:15:10)
at emit (events.js:104:17)
at Socket.anonymous
(/home/ubuntu/shekhar/thrift/client/node_modules/thrift/lib/thrift/connection.js:73:10)
at Socket.emit (events.js:129:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1001:10)


Any idea why this error occurs?

Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Re: thrift.ProcessFunction: Internal error processing get

2015-08-24 Thread Chandrashekhar Kotekar
Did it work on your system? Are you able to get a row from HBase using
Node.js?


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455

On Mon, Aug 24, 2015 at 9:22 PM, Ted Yu yuzhih...@gmail.com wrote:

 When I clicked on http://pastebin.com/embed.php?i=r9uqr8iN , I didn't see
 the 'Internal error' message.
 I tried the above operation both at home and at work.

 Can you double-check your code, considering the 'Invalid method name'
 message?

 Thanks




Re: HBase co-processor performance

2015-07-16 Thread Chandrashekhar Kotekar
Hi,

Thanks for the inputs. As you said, it is better to change the database design
than to move this business logic into co-processors. Sorry for the duplicate
mail; I guess it was stuck in my mobile's outbox and was sent after my phone
synced.


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455

On Wed, Jul 15, 2015 at 12:40 PM, anil gupta anilgupt...@gmail.com wrote:

 Using a coprocessor to make calls to other tables or remote regions is an
 ANTI-PATTERN. It will create cyclic dependencies between region servers in
 your cluster. Coprocessors should be strictly used for operations on local
 regions. Search the mailing list archives for a more detailed discussion of
 this topic.

 How about denormalizing the data and then just doing ONE call? Now this
 becomes more of a data modeling question.

 Thanks,
 Anil Gupta




 --
 Thanks & Regards,
 Anil Gupta



HBase co-processor performance

2015-07-15 Thread Chandrashekhar Kotekar
Hi,

The REST APIs in my project make 2-3 calls to different tables in HBase. These
calls take tens of milliseconds to finish.

I would like to know:

1) Will moving the business logic into HBase co-processors and/or observers
improve performance?

The idea is to pass all the related information to the co-processor/observer;
the co-processor would then make those 2-3 calls to the different HBase tables
and return the result to the client.

2) Will this approach reduce the time to finish, or is it a bad approach?

3) If a co-processor running on one region server fetches data from another
region server, wouldn't that be the same as the Tomcat server fetching that
data from the HBase region server directly?

Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Performance of co-processor and observer while fetching data from other RS

2015-07-15 Thread Chandrashekhar Kotekar
Hi,

The REST APIs in my project make 2-3 calls to different tables in HBase. These
calls take tens of milliseconds to finish.

I would like to know:

1) Will moving the business logic into HBase co-processors and/or observers
improve performance?

The idea is to pass all the related information to the co-processor/observer;
the co-processor would then make those 2-3 calls to the different HBase tables
and return the result to the client.

2) Will this approach reduce the time to finish, or is it a bad approach?

3) If a co-processor running on one region server fetches data from another
region server, wouldn't that be the same as the Tomcat server fetching that
data from the HBase region server directly?


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Is it possible to execute co-processors like cron job?

2015-06-19 Thread Chandrashekhar Kotekar
Hi,

Can you please help me determine whether HBase has any feature that satisfies
the following?

1) I need something native to HBase, and

2) which can execute some code at a certain interval, such as daily or weekly.

Is there any HBase feature that satisfies these two conditions?

I think coprocessors can execute custom code, but I am not sure whether
coprocessors can be executed like cron jobs at certain intervals. Is that
possible?

What other mechanisms could achieve this? I am looking for something built
into HBase itself and want to avoid a dependency on external mechanisms like
cron jobs.

Thanks,
Chandrash3khar Kotekar
Mobile - +91 8600011455
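As far as I know there is no user-facing cron-like scheduler built into HBase itself (internally it uses Chore threads for housekeeping, but those are not exposed for custom jobs), so the periodic trigger usually lives in a client application, even when the actual work is done by a coprocessor endpoint. A hedged sketch of such a trigger using ScheduledExecutorService (the Runnable body is a placeholder; a real job would open an HBase connection and invoke the coprocessor there):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicHBaseJob {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        CountDownLatch ranTwice = new CountDownLatch(2);
        // Placeholder job: a real one would open an HBase connection and
        // scan / aggregate / call a coprocessor endpoint. Here it only
        // counts how many times the scheduler fired it.
        Runnable job = ranTwice::countDown;
        // Fire every 100 ms for this demo; a daily job would use
        // scheduleAtFixedRate(job, 0, 1, TimeUnit.DAYS).
        scheduler.scheduleAtFixedRate(job, 0, 100, TimeUnit.MILLISECONDS);
        boolean ok = ranTwice.await(5, TimeUnit.SECONDS);
        scheduler.shutdownNow();
        System.out.println(ok ? "job ran at least twice" : "job did not run");
    }
}
```

The short period here is only so the sketch finishes quickly; the scheduling pattern is the same for daily or weekly intervals.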


How to create HTableInterface object per thread in REST API?

2015-06-05 Thread Chandrashekhar Kotekar
Hello everyone,

We have a REST API that communicates with HBase for CRUD operations. During
load testing we saw that the REST API throws a 'String index out of range'
exception if multiple parallel requests try to insert hundreds of cells into
HBase.

After looking at the HBase code and reading the HBase API documentation, I
learned that HTableInterface objects are not thread-safe and that it is
recommended to create a separate HTableInterface object per thread.

As REST APIs are nothing but servlets, and thread management is done by the
web server (we are using Tomcat and Jersey for this project), I would like to
know how to ensure that the web server/servlets create an HTableInterface
object for each thread.

I have created connection object as shown below :

return HConnectionManager.createConnection(config,
Executors.newCachedThreadPool());

and create HTableInterface object as below:

protected HTableInterface hTableInterface;

public void insertRecord(String recordDetails) {
  this.hTableInterface = connection.getTable("readings_table",
      Executors.newCachedThreadPool());
  // rest of the operations regarding parsing the record
  try {
    this.hTableInterface.put(record);
  } catch (Exception ex) {
    // log exception
  } finally {
    this.hTableInterface.flushCommits();
    this.hTableInterface.close();
  }
}

Here hTableInterface object is member of the class.

But this isn't helping. Is there any other way to ensure that a separate
HTableInterface object is created for each thread?

Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455
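The usual fix for the situation above is to stop caching the table handle in a servlet field: keep one shared HConnection (which is thread-safe), and inside each request fetch the table as a local variable and close it in a finally block, so no handle is ever shared between threads. A minimal, HBase-free sketch of that shape (FakeTable and getTable() are stand-ins for HTableInterface and connection.getTable(), not the real HBase API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerRequestTableDemo {
    static class FakeTable {}                         // stand-in for HTableInterface
    static FakeTable getTable() { return new FakeTable(); }  // stand-in for connection.getTable(...)

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Set<FakeTable> perCall =
            Collections.newSetFromMap(new ConcurrentHashMap<>());
        List<Future<?>> futures = new ArrayList<>();
        // Simulate 8 concurrent servlet requests.
        for (int i = 0; i < 8; i++) {
            futures.add(pool.submit(() -> {
                // Correct shape: a LOCAL variable per request, so each
                // request works on its own instance and nothing is shared.
                FakeTable table = getTable();
                perCall.add(table);
                // In real code: try { table.put(...); } finally { table.close(); }
            }));
        }
        for (Future<?> f : futures) f.get();
        pool.shutdown();
        // 8 requests -> 8 distinct table instances.
        System.out.println(perCall.size());
    }
}
```

Because the handle is a local variable, each of the 8 simulated requests gets its own instance; with the shared `hTableInterface` field from the snippet above, concurrent requests would overwrite each other's handle mid-operation.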


HBase copyTable stuck at map 100% reduce 0%

2015-05-02 Thread Chandrashekhar Kotekar
Hi,

I am copying a table from the primary cluster to the backup cluster using the
copyTable command, but the M/R job spawned by this command is stuck at
"map 100% reduce 0%".

The command used is: hbase org.apache.hadoop.hbase.mapreduce.CopyTable
-Dhbase.client.scanner.caching=100 --peer.adr=target-nn1:/hbase

Here is the part of the logs where the job has been stuck for the last 15 minutes.

15/05/02 09:49:17 INFO mapreduce.JobSubmitter: number of splits:1
15/05/02 09:49:17 INFO Configuration.deprecation: io.bytes.per.checksum is
deprecated. Instead, use dfs.bytes-per-checksum
15/05/02 09:49:17 INFO mapreduce.JobSubmitter: Submitting tokens for job:
job_1424936551928_0181
15/05/02 09:49:18 INFO impl.YarnClientImpl: Submitted application
application_1424936551928_0181
15/05/02 09:49:18 INFO mapreduce.Job: The url to track the job:
http://resource-manager:8088/proxy/application_1424936551928_0181/
15/05/02 09:49:18 INFO mapreduce.Job: Running job: job_1424936551928_0181
15/05/02 09:49:27 INFO mapreduce.Job: Job job_1424936551928_0181 running in
uber mode : false
15/05/02 09:49:27 INFO mapreduce.Job:  map 0% reduce 0%
15/05/02 09:49:40 INFO mapreduce.Job:  map 100% reduce 0%


Does anyone know why this M/R job hangs at reduce 0%? I am also not able to
see the M/R job logs, because this cluster is in a VPC and I can't reach the
YARN job web UI.

Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Cluster replication is not replicating any data

2015-04-28 Thread Chandrashekhar Kotekar
Hi,

I have set up cluster replication between 2 clusters but data is not getting
copied. Can anyone please help me with cluster replication?

I would like to know: if I add only 1 row to one of the tables, will that row
get replicated to the other cluster, or does HBase wait for some time, or wait
until a certain amount of data has been added/deleted/edited (like 64MB)?

The clusters are based on CDH 5.3.1, which has HBase 0.98.6. I have named the
slave nodes of both clusters slave1 and slave2, and the master server is named
hbase-master (this mapping is in the /etc/hosts file).

I took the following steps to set up replication:
1) Set the 'hbase.replication' property in the 'hbase-site.xml' file to true
in both clusters.

2) Restarted the HBase master and all region servers of both clusters. (Does
restarting need to be done in any specific order, like master first and then
region servers, or vice versa?)

3) Disabled and altered all the tables in both clusters by adding {NAME
=> 'CF', REPLICATION_SCOPE => 1} to the alter statement, and then enabled the
tables one by one. (CF is the actual column family name.)

4) Added a peer using add_peer '1', '10.0.21.249:2181:/hbase' in the primary
cluster.

5) Added a peer using add_peer 'nc', '10.0.21.111:/hbase' in the target cluster.

6) The primary cluster already has 4 rows. Added one more row, with one column,
to COMPANY_TABLE in the primary cluster, so I expected either a total of 5 rows
in the target cluster or at least the newly added row. Waited a few minutes for
the row to be replicated to the other cluster. Executed scan 'table_name' in
the target cluster but didn't get that row from the primary cluster.

7) Executed the following VerifyReplication command in the primary cluster.
This command runs an M/R job and shows 4 good rows, but in the target cluster
no rows are present for the COMPANY_TABLE table.

hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication
--families=CF nc COMPANY_TABLE


There's no error in the region server logs. Sample log lines from the primary
cluster's region server are below:

15/04/28 10:48:16 INFO regionserver.HRegionServer: Adding moved region
record: 1a685b27708dfea86bb2d8a9ca1bceb5 to
ip-10-0-21-90.ec2.internal,60020,1430217861644:60020 as of 4873480
15/04/28 10:51:06 INFO regionserver.Replication: Normal source for cluster
nc: Total replicated edits: 0, currently replicating from:
hdfs://StagingCluster/hbase/WALs/slave2,60020,1430201462672/slave2%2C60020%2C1430201462672.1430215999727
at position: 83

15/04/28 10:56:06 INFO regionserver.Replication: Normal source for cluster
nc: Total replicated edits: 0, currently replicating from:
hdfs://StagingCluster/hbase/WALs/slave2,60020,1430201462672/slave2%2C60020%2C1430201462672.1430215999727
at position: 83

15/04/28 11:01:06 INFO regionserver.Replication: Normal source for cluster
nc: Total replicated edits: 0, currently replicating from:
hdfs://StagingCluster/hbase/WALs/slave2,60020,1430201462672/slave2%2C60020%2C1430201462672.1430215999727
at position: 83

Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Re: Cluster replication is not replicating any data

2015-04-28 Thread Chandrashekhar Kotekar
Solved the problem. I re-copied the hbase-site.xml file, which contains the
'hbase.replication' property, from the master server to the region servers,
restarted both clusters, and replication started working. It looks like the
updated file had not been copied to the region servers.


Thanks a lot for your pointers.


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455

On Tue, Apr 28, 2015 at 5:11 PM, Dejan Menges dejan.men...@gmail.com
wrote:

 Maybe you missed some steps - I did it bunch of times following these steps
 here:

 http://hbase.apache.org/book.html#_cluster_replication

 On Tue, Apr 28, 2015 at 1:39 PM Chandrashekhar Kotekar 
 shekhar.kote...@gmail.com wrote:

  Yes. When replication did not work, I copied the 4 rows that were already
  present in the primary cluster using the copyTables program. Those 4 rows
  were properly copied into the target cluster.
 
 
  Regards,
  Chandrash3khar Kotekar
  Mobile - +91 8600011455
 
  On Tue, Apr 28, 2015 at 4:48 PM, Dejan Menges dejan.men...@gmail.com
  wrote:
 
   Hi,
  
   Did you copy the table that you want to replicate first to destination
   cluster?
  
   Thanks,
   Dejan
  



Re: InvocationTargetException exception from org.apache.hadoop.hbase.client.HConnectionManager.createConnection

2015-03-03 Thread Chandrashekhar Kotekar
Hi JM,

Thanks for the answer. My code was missing hdfs-site.xml. I added this file
as well while creating the Configuration object and the error was gone, but
some other errors came up, which I solved by putting the proper jar files on
the classpath.

I had to include hdfs-site.xml in the configuration because my Hadoop cluster
uses the NameNode HA feature. The strange thing is that I was not able to find
this information anywhere; after a lot of trial and error I found out that the
hdfs-site.xml file is also required.




Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455

On Tue, Mar 3, 2015 at 9:59 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
 wrote:

 Hi Chandrashekhar,

 Can you make sure your hbase-site.xml is on the classpath, and remove the
 addResource line from your code?

 JM

 2015-03-03 0:07 GMT-05:00 Chandrashekhar Kotekar 
 shekhar.kote...@gmail.com
 :

  My Tomcat-based REST API application is not able to process requests due
  to the above-mentioned error. I have tried the following things so far:

  1. Checking whether all the jar files are available
  2. Checking permissions on all files present in the tomcat/webapp/ directory
  3. Firewall rules
  4. Checking whether HBase is available

  but I am still getting the following exception. I am using CDH 5.3.1, which
  contains HBase 0.98.6. Does anyone know how to resolve this issue?
 
 
  2015-03-03 05:09:02 privateLog [ERROR]
  java.lang.reflect.InvocationTargetException
  org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:413)
  org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:306)
  com.amazon.dao.MyDAO.<clinit>(SensorDataDAO.java:78)
  sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
  sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  java.lang.reflect.Constructor.newInstance(Constructor.java:526)
  org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:126)
  org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:74)
  org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:958)
  org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:911)
  org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485)
  org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
  org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
  org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
  org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
  org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
  org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580)
  org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
  org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
  org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
  org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
  org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
  org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4779)
  org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5273)
  org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
  org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895)
  org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871)
  org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615)
  org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:958)
  org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1599)
  java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
  java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
  java.util.concurrent.FutureTask.run(FutureTask.java:166)
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  java.util.concurrent.ThreadPoolExecutor

InvocationTargetException exception from org.apache.hadoop.hbase.client.HConnectionManager.createConnection

2015-03-02 Thread Chandrashekhar Kotekar
My Tomcat-based REST API application is not able to process requests due to
the above-mentioned error. I have tried the following things so far:

   1. Checking whether all the jar files are available
   2. Checking permissions on all files present in the tomcat/webapp/ directory
   3. Checking the firewall rules
   4. Checking whether HBase is available

but I am still getting the following exception. I am using CDH 5.3.1, which
contains HBase 0.98.6. Does anyone know how to resolve this issue?


2015-03-03 05:09:02 privateLog [ERROR]
java.lang.reflect.InvocationTargetException
org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:413)
org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:306)
com.amazon.dao.MyDAO.<clinit>(SensorDataDAO.java:78)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:126)
org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:74)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:958)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:911)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580)
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4779)
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5273)
org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895)
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871)
org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615)
org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:958)
org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1599)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
java.util.concurrent.FutureTask.run(FutureTask.java:166)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:724)
com.amazon.dao.MyDAO <clinit>


The code which tries to establish the connection is as follows:

public class MyDAO {

  protected static HConnection connection;

  static {
    try {
      Configuration conf = HBaseConfiguration.create();
      conf.addResource("hbase-site.xml");
      connection = HConnectionManager.createConnection(conf);
    } catch (Exception ex) {
      // if createConnection() throws, connection stays null here
      ex.printStackTrace();
    }
  }
}


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Re: Storing Json format in Hbase

2015-01-04 Thread Chandrashekhar Kotekar
You can convert the XML to JSON using a map-reduce program and then store
the JSON into HBase, but you need to decide what your row key should be.

Another point you have to take into account is whether you want to search
inside the JSON or not. If you want to search inside the JSON, then HBase
won't be the best option for you; you could switch to MongoDB or some
other document store instead.

Hope it helps...
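To make the conversion step concrete, here is a minimal, self-contained
sketch of the XML-to-JSON transformation that the map phase would perform
(class name, the sample tags, and the "flat child elements, no attributes"
simplification are all my own assumptions, not from the original thread):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XmlToJson {

    // Parses an XML string and renders the root element's child
    // elements as a flat JSON object of tag -> text value.
    // (Attributes, nesting, and repeated tags are ignored for brevity.)
    static String convert(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        Element root = doc.getDocumentElement();
        StringBuilder sb = new StringBuilder("{");
        NodeList children = root.getChildNodes();
        boolean first = true;
        for (int i = 0; i < children.getLength(); i++) {
            Node n = children.item(i);
            if (n.getNodeType() != Node.ELEMENT_NODE) continue;
            if (!first) sb.append(",");
            first = false;
            sb.append('"').append(n.getNodeName()).append("\":\"")
              .append(n.getTextContent()).append('"');
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) throws Exception {
        // In the HBase write path, the <id> value would typically become
        // the row key and the JSON string the cell value.
        String xml = "<record><id>row-1</id><temp>21</temp></record>";
        System.out.println(convert(xml));
    }
}
```

The point of keeping the JSON as one opaque cell value is exactly the
trade-off described above: writes and key lookups are cheap, but anything
inside the JSON is invisible to HBase filters.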

Regards,
Chandrashekhar
On 04-Jan-2015 3:32 PM, Shashidhar Rao raoshashidhar...@gmail.com wrote:

 Hi,

 Can someone guide me if the solution I am proposing is a feasible option or
 not

 1. Large xml data is delivered through external system.
 2. Convert these into json format.
 3. Store it into HBASE ,even though there will be hardly any updates , only
 retrieval. I have looked at Hive but finally had to decide against it as
 retrieval would be slow.
 4. Need to use Hadoop Nosql as other components are all using Hadoop
 ecosystem.

 Can xml data be directly stored into Hbase without any
 transformation.(second question)

 Any suggestions on storing xml data on Nosql. (only open source and no
 commercial nosql)

 Thanks in advance

 Shashi



Re: Hello!

2014-11-02 Thread Chandrashekhar Kotekar
Hello Jackie, it looks like you have just joined the mailing list. :D


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455

On Sun, Nov 2, 2014 at 6:52 PM, jackie jackiehbaseu...@126.com wrote:

 Hello!


Re: Could not resolve the DNS name of slave2:60020

2014-08-01 Thread Chandrashekhar Kotekar
Thanks a lot for your help :)


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


On Fri, Aug 1, 2014 at 11:46 AM, Chandrashekhar Kotekar 
shekhar.kote...@gmail.com wrote:

 Not sure if client will approve the upgrade.. :(


 Regards,
 Chandrash3khar Kotekar
 Mobile - +91 8600011455


 On Thu, Jul 31, 2014 at 8:33 PM, Jean-Marc Spaggiari 
 jean-m...@spaggiari.org wrote:

 Oh. 0.90.6 is VERY old. Any chance for you to upgrade a more recent
 version
 like CDH4 or even CDH5 (HBase 0.98)?


 2014-07-31 3:25 GMT-04:00 Chandrashekhar Kotekar 
 shekhar.kote...@gmail.com
 :

  No, we are using hbase-0.90.6-cdh3u6
 
 
  Regards,
  Chandrash3khar Kotekar
  Mobile - +91 8600011455
 
 
  On Thu, Jul 31, 2014 at 12:49 PM, Qiang Tian tian...@gmail.com wrote:
 
   see https://issues.apache.org/jira/browse/HBASE-3556
   it looks you are using a very old release? 0.90 perhaps?
  
  
  
   On Thu, Jul 31, 2014 at 2:24 PM, Chandrashekhar Kotekar 
   shekhar.kote...@gmail.com wrote:
  
This is how /etc/hosts file looks like on HBase master node
   
ubuntu@master:~$ cat /etc/hosts
10.78.21.133 master
#10.62.126.245 slave1
#10.154.133.161 slave1
10.224.115.218 slave1
10.32.213.195 slave2
   
and the code which actually tries to connect is as shown below:
   
hTableInterface=hTablePool.getTable(POINTS_TABLE);
   
   
   
Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455
   
   
On Wed, Jul 30, 2014 at 6:43 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:
   
 Hi Chandrash,

 What do you have in your /etc/hosts? Can you also share the piece
 of
   code
 where you are doing the connection to HBase?

 Thanks,

 JM


 2014-07-30 7:34 GMT-04:00 Chandrashekhar Kotekar 
 shekhar.kote...@gmail.com
 :

  I have a HBase cluster on AWS. I have written few REST services
  which
are
  supposed to connect to this HBase cluster and get some data.
 
  My configuration is as below :
 
 1. Java code, eclipse, tomcat running on my desktop
 2. HBase cluster, Hadoop cluster sitting on AWS
 3. Can connect to HBase cluster, Hadoop cluster ONLY THROUGH
 VPN
 
  Whenever web service tries to do ANY operation on HBase, it
 throws
Could
  not resolve the DNS name of slave2:60020 error with following
 stack
 trace.
 
 
  java.lang.IllegalArgumentException: Could not resolve the DNS
 name
  of
  slave2:60020
  at
 

   
  
 
 org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
  at
 
  org.apache.hadoop.hbase.HServerAddress.init(HServerAddress.java:66)
  at
 

   
  
 
 org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
  at
 

   
  
 
 org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
  at
 

   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:578)
  at
 

   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
  at
 

   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:687)
  at
 

   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:589)
 
   at
 

   
  
 
 org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:129)
  at
 
   org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:96)
  at
   com.shekhar.dao.AdminDAO.getAdminInfoByRowKey(AdminDAO.java:63)
  at

 com.shekhar.auth.Authorization.isSystemAdmin(Authorization.java:41)
  at
 

   
  
 
 com.shekhar.business.ReadingProcessor.getReadingInfo(ReadingProcessor.java:310)
  at
 
   
  com.shekhar.services.ReadingService.getReadings(ReadingService.java:543)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
 Method)
 
 
 
 
  In hbase-site.xml file I have given IP address of HBase master
 and
  in
  core-site.xml file IP address of namenode is given.
 
  Has anyone faced this type of problem? Why this problem arises?
 
  I have posted same question on Stack overflow but no one replied
   hence
  posting question here.
 
  Request you to please help.
 
 
 
  Regards,
  Chandrash3khar Kotekar
  Mobile - +91 8600011455
 

   
  
 





Re: Could not resolve the DNS name of slave2:60020

2014-08-01 Thread Chandrashekhar Kotekar
Not sure if the client will approve the upgrade... :(


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


On Thu, Jul 31, 2014 at 8:33 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:

 Oh. 0.90.6 is VERY old. Any chance for you to upgrade a more recent version
 like CDH4 or even CDH5 (HBase 0.98)?


 2014-07-31 3:25 GMT-04:00 Chandrashekhar Kotekar 
 shekhar.kote...@gmail.com
 :

  No, we are using hbase-0.90.6-cdh3u6
 
 
  Regards,
  Chandrash3khar Kotekar
  Mobile - +91 8600011455
 
 
  On Thu, Jul 31, 2014 at 12:49 PM, Qiang Tian tian...@gmail.com wrote:
 
   see https://issues.apache.org/jira/browse/HBASE-3556
   it looks you are using a very old release? 0.90 perhaps?
  
  
  
   On Thu, Jul 31, 2014 at 2:24 PM, Chandrashekhar Kotekar 
   shekhar.kote...@gmail.com wrote:
  
This is how /etc/hosts file looks like on HBase master node
   
ubuntu@master:~$ cat /etc/hosts
10.78.21.133 master
#10.62.126.245 slave1
#10.154.133.161 slave1
10.224.115.218 slave1
10.32.213.195 slave2
   
and the code which actually tries to connect is as shown below:
   
hTableInterface=hTablePool.getTable(POINTS_TABLE);
   
   
   
Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455
   
   
On Wed, Jul 30, 2014 at 6:43 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:
   
 Hi Chandrash,

 What do you have in your /etc/hosts? Can you also share the piece
 of
   code
 where you are doing the connection to HBase?

 Thanks,

 JM


 2014-07-30 7:34 GMT-04:00 Chandrashekhar Kotekar 
 shekhar.kote...@gmail.com
 :

  I have a HBase cluster on AWS. I have written few REST services
  which
are
  supposed to connect to this HBase cluster and get some data.
 
  My configuration is as below :
 
 1. Java code, eclipse, tomcat running on my desktop
 2. HBase cluster, Hadoop cluster sitting on AWS
 3. Can connect to HBase cluster, Hadoop cluster ONLY THROUGH
 VPN
 
  Whenever web service tries to do ANY operation on HBase, it
 throws
Could
  not resolve the DNS name of slave2:60020 error with following
 stack
 trace.
 
 
  java.lang.IllegalArgumentException: Could not resolve the DNS
 name
  of
  slave2:60020
  at
 

   
  
 
 org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
  at
 
  org.apache.hadoop.hbase.HServerAddress.init(HServerAddress.java:66)
  at
 

   
  
 
 org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
  at
 

   
  
 
 org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
  at
 

   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:578)
  at
 

   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
  at
 

   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:687)
  at
 

   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:589)
 
   at
 

   
  
 
 org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:129)
  at
 
   org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:96)
  at
   com.shekhar.dao.AdminDAO.getAdminInfoByRowKey(AdminDAO.java:63)
  at
 com.shekhar.auth.Authorization.isSystemAdmin(Authorization.java:41)
  at
 

   
  
 
 com.shekhar.business.ReadingProcessor.getReadingInfo(ReadingProcessor.java:310)
  at
 
   
  com.shekhar.services.ReadingService.getReadings(ReadingService.java:543)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
 Method)
 
 
 
 
  In hbase-site.xml file I have given IP address of HBase master
 and
  in
  core-site.xml file IP address of namenode is given.
 
  Has anyone faced this type of problem? Why this problem arises?
 
  I have posted same question on Stack overflow but no one replied
   hence
  posting question here.
 
  Request you to please help.
 
 
 
  Regards,
  Chandrash3khar Kotekar
  Mobile - +91 8600011455
 

   
  
 



Re: Could not resolve the DNS name of slave2:60020

2014-07-31 Thread Chandrashekhar Kotekar
This is how the /etc/hosts file looks on the HBase master node:

ubuntu@master:~$ cat /etc/hosts
10.78.21.133 master
#10.62.126.245 slave1
#10.154.133.161 slave1
10.224.115.218 slave1
10.32.213.195 slave2

and the code which actually tries to connect is shown below:

hTableInterface = hTablePool.getTable(POINTS_TABLE);
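In case it helps to isolate the problem from HBase entirely: the failing
check is just hostname resolution on the client machine. A minimal sketch
that roughly mirrors what HServerAddress.checkBindAddressCanBeResolved
does (the hostname "slave2" is taken from the stack trace; run this from
the desktop, over the VPN):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {

    // Returns true if the host resolves from this machine.
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The region server name stored in ZooKeeper ("slave2") must
        // resolve on the CLIENT, not just on the cluster nodes.
        System.out.println("slave2 resolves: " + resolves("slave2"));
    }
}
```

If this prints false on the desktop, adding the slave entries to the
client's own /etc/hosts (not only the master's) should fix the error.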



Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


On Wed, Jul 30, 2014 at 6:43 PM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:

 Hi Chandrash,

 What do you have in your /etc/hosts? Can you also share the piece of code
 where you are doing the connection to HBase?

 Thanks,

 JM


 2014-07-30 7:34 GMT-04:00 Chandrashekhar Kotekar 
 shekhar.kote...@gmail.com
 :

  I have a HBase cluster on AWS. I have written few REST services which are
  supposed to connect to this HBase cluster and get some data.
 
  My configuration is as below :
 
 1. Java code, eclipse, tomcat running on my desktop
 2. HBase cluster, Hadoop cluster sitting on AWS
 3. Can connect to HBase cluster, Hadoop cluster ONLY THROUGH VPN
 
  Whenever web service tries to do ANY operation on HBase, it throws Could
  not resolve the DNS name of slave2:60020 error with following stack
 trace.
 
 
  java.lang.IllegalArgumentException: Could not resolve the DNS name of
  slave2:60020
  at
 
 org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
  at
  org.apache.hadoop.hbase.HServerAddress.init(HServerAddress.java:66)
  at
 
 org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
  at
 
 org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
  at
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:578)
  at
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
  at
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:687)
  at
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:589)
 
   at
 
 org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:129)
  at
  org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:96)
  at com.shekhar.dao.AdminDAO.getAdminInfoByRowKey(AdminDAO.java:63)
  at
 com.shekhar.auth.Authorization.isSystemAdmin(Authorization.java:41)
  at
 
 com.shekhar.business.ReadingProcessor.getReadingInfo(ReadingProcessor.java:310)
  at
  com.shekhar.services.ReadingService.getReadings(ReadingService.java:543)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 
 
 
 
  In hbase-site.xml file I have given IP address of HBase master and in
  core-site.xml file IP address of namenode is given.
 
  Has anyone faced this type of problem? Why this problem arises?
 
  I have posted same question on Stack overflow but no one replied hence
  posting question here.
 
  Request you to please help.
 
 
 
  Regards,
  Chandrash3khar Kotekar
  Mobile - +91 8600011455
 



Re: Could not resolve the DNS name of slave2:60020

2014-07-31 Thread Chandrashekhar Kotekar
No, we are using hbase-0.90.6-cdh3u6


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


On Thu, Jul 31, 2014 at 12:49 PM, Qiang Tian tian...@gmail.com wrote:

 see https://issues.apache.org/jira/browse/HBASE-3556
 it looks you are using a very old release? 0.90 perhaps?



 On Thu, Jul 31, 2014 at 2:24 PM, Chandrashekhar Kotekar 
 shekhar.kote...@gmail.com wrote:

  This is how /etc/hosts file looks like on HBase master node
 
  ubuntu@master:~$ cat /etc/hosts
  10.78.21.133 master
  #10.62.126.245 slave1
  #10.154.133.161 slave1
  10.224.115.218 slave1
  10.32.213.195 slave2
 
  and the code which actually tries to connect is as shown below:
 
  hTableInterface=hTablePool.getTable(POINTS_TABLE);
 
 
 
  Regards,
  Chandrash3khar Kotekar
  Mobile - +91 8600011455
 
 
  On Wed, Jul 30, 2014 at 6:43 PM, Jean-Marc Spaggiari 
  jean-m...@spaggiari.org wrote:
 
   Hi Chandrash,
  
   What do you have in your /etc/hosts? Can you also share the piece of
 code
   where you are doing the connection to HBase?
  
   Thanks,
  
   JM
  
  
   2014-07-30 7:34 GMT-04:00 Chandrashekhar Kotekar 
   shekhar.kote...@gmail.com
   :
  
I have a HBase cluster on AWS. I have written few REST services which
  are
supposed to connect to this HBase cluster and get some data.
   
My configuration is as below :
   
   1. Java code, eclipse, tomcat running on my desktop
   2. HBase cluster, Hadoop cluster sitting on AWS
   3. Can connect to HBase cluster, Hadoop cluster ONLY THROUGH VPN
   
Whenever web service tries to do ANY operation on HBase, it throws
  Could
not resolve the DNS name of slave2:60020 error with following stack
   trace.
   
   
java.lang.IllegalArgumentException: Could not resolve the DNS name of
slave2:60020
at
   
  
 
 org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
at
org.apache.hadoop.hbase.HServerAddress.init(HServerAddress.java:66)
at
   
  
 
 org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
at
   
  
 
 org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
at
   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:578)
at
   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
at
   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:687)
at
   
  
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:589)
   
 at
   
  
 
 org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:129)
at
   
 org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:96)
at
 com.shekhar.dao.AdminDAO.getAdminInfoByRowKey(AdminDAO.java:63)
at
   com.shekhar.auth.Authorization.isSystemAdmin(Authorization.java:41)
at
   
  
 
 com.shekhar.business.ReadingProcessor.getReadingInfo(ReadingProcessor.java:310)
at
   
  com.shekhar.services.ReadingService.getReadings(ReadingService.java:543)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   
   
   
   
In hbase-site.xml file I have given IP address of HBase master and in
core-site.xml file IP address of namenode is given.
   
Has anyone faced this type of problem? Why this problem arises?
   
I have posted same question on Stack overflow but no one replied
 hence
posting question here.
   
Request you to please help.
   
   
   
Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455
   
  
 



Could not resolve the DNS name of slave2:60020

2014-07-30 Thread Chandrashekhar Kotekar
I have an HBase cluster on AWS. I have written a few REST services which are
supposed to connect to this HBase cluster and get some data.

My configuration is as below:

   1. Java code, Eclipse, and Tomcat running on my desktop
   2. HBase cluster and Hadoop cluster sitting on AWS
   3. The HBase and Hadoop clusters can be reached ONLY THROUGH a VPN

Whenever the web service tries to do ANY operation on HBase, it throws a
"Could not resolve the DNS name of slave2:60020" error with the following
stack trace.


java.lang.IllegalArgumentException: Could not resolve the DNS name of
slave2:60020
at 
org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
at org.apache.hadoop.hbase.HServerAddress.<init>(HServerAddress.java:66)
at 
org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
at 
org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:578)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:687)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:589)

 at org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:129)
at org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:96)
at com.shekhar.dao.AdminDAO.getAdminInfoByRowKey(AdminDAO.java:63)
at com.shekhar.auth.Authorization.isSystemAdmin(Authorization.java:41)
at 
com.shekhar.business.ReadingProcessor.getReadingInfo(ReadingProcessor.java:310)
at com.shekhar.services.ReadingService.getReadings(ReadingService.java:543)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)




In the hbase-site.xml file I have given the IP address of the HBase master,
and in the core-site.xml file the IP address of the namenode is given.

Has anyone faced this type of problem? Why does this problem arise?

I have posted the same question on Stack Overflow but no one replied, hence
I am posting the question here.

Request you to please help.



Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455