Hi all,

I just ran into the following issue:

I have an HBase cluster running in EC2 that was launched with the 0.20.4 EC2 
launch scripts. I'd like to connect to that cluster using an HBase client 
running locally (outside of EC2).
I pulled down my hbase-site.xml file from the master, which references the 
internal hostnames of the master and ZooKeeper nodes. I changed those to the 
public hostnames.
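
For reference, this is effectively what my client ends up with after the 
edit; a minimal sketch, where the ec2-* hostname is a placeholder for my 
actual public ZooKeeper address:

    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ClientConf {
        // Equivalent of the edited hbase-site.xml entries, set in code.
        static HBaseConfiguration create() {
            HBaseConfiguration conf = new HBaseConfiguration();
            // Public EC2 hostname of the ZooKeeper node (placeholder).
            conf.set("hbase.zookeeper.quorum",
                     "ec2-xx-xx-xx-xx.compute-1.amazonaws.com");
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            return conf;
        }
    }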
When running my client, it connects to the ZooKeeper node without any 
problem; however, ZooKeeper only knows the region servers by their internal 
hostnames, which resolve to IPs in the 10.193.x.x range.
Needless to say, I can't reach the region servers at those 10.193.x.x 
addresses from my local machine, so my client doesn't work.
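
To make the failure mode concrete, here is a minimal sketch of the kind of 
client code that fails; the table name "test" and the row key are 
placeholders, and the quorum hostname is the public address described above:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class Ec2ClientTest {
        public static void main(String[] args) throws Exception {
            HBaseConfiguration conf = new HBaseConfiguration();
            // Public EC2 hostname of the ZooKeeper node (placeholder).
            conf.set("hbase.zookeeper.quorum",
                     "ec2-xx-xx-xx-xx.compute-1.amazonaws.com");
            // The ZooKeeper handshake itself succeeds...
            HTable table = new HTable(conf, "test");
            // ...but locating the table's regions and performing the read
            // require connecting to the region servers at the internal
            // 10.193.x.x addresses handed back by the cluster, which are
            // unreachable from outside EC2.
            Result r = table.get(new Get(Bytes.toBytes("row1")));
            System.out.println(r);
        }
    }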

Does anyone know if there is a configuration flag I can set somewhere to get 
the public hostnames out of ZooKeeper instead of the private ones?

Best,

Patrick Salami - Senior Software Engineer
[ p. (858) 449-2241 e. [email protected] ]
Temboo Inc - 6935 West Bernardo Dr - San Diego, CA 92127

www.temboo.com
