The HBase daemons each try to use a single ZK connection for
themselves. A RegionServer also does not need to mutate state in ZK to
serve things like gets and puts.
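To make that concrete from the client side, here is a minimal sketch
(the table, family, and qualifier names are placeholders, not from
your cluster): one shared Connection owns a single ZK session, which
is used to bootstrap region locations, while the gets and puts
themselves are RPCs straight to the RegionServers.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SingleConnectionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // One Connection per process: it holds one ZK session, used
        // to locate hbase:meta, not once per operation.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("t1"))) {
          // Reads and writes are RPCs to the RegionServers; no ZK
          // state is read or mutated here.
          table.put(new Put(Bytes.toBytes("row1"))
              .addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"),
                  Bytes.toBytes("v")));
          Result r = table.get(new Get(Bytes.toBytes("row1")));
          System.out.println(r);
        }
      }
    }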

Phoenix is probably the thing you need to look at more closely
(especially if you're using an old version of Phoenix to match the old
HBase 1.1 release). Internally, Phoenix acts like an HBase client,
which results in a new ZK connection. There have certainly been bugs
like that in the past (speaking generally, not about a specific
issue).
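As a rough illustration of that client behavior (a sketch only; the
quorum host, table name, and key range are placeholders), each
distinct Phoenix JDBC URL stands up an internal HBase client, and that
client opens its own ZK session:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PhoenixClientSketch {
      public static void main(String[] args) throws Exception {
        // Behind this JDBC connection Phoenix embeds an HBase client,
        // which opens its own ZK session to the quorum in the URL.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:phoenix:zk1.example.com:2181");
             Statement stmt = conn.createStatement();
             // MY_TABLE / PK are placeholders; a short range scan
             // like the ones you describe would look similar.
             ResultSet rs = stmt.executeQuery(
                 "SELECT COUNT(*) FROM MY_TABLE "
                     + "WHERE PK BETWEEN 'a' AND 'b'")) {
          while (rs.next()) {
            System.out.println(rs.getLong(1));
          }
        }
      }
    }

If the Phoenix coprocessors running inside each RegionServer do the
same internally, that would be consistent with the RS IPs being the
ones that show up against the per-IP cap in your ZK logs.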

On 6/1/20 5:59 PM, anil gupta wrote:
Hi Folks,

We are running into problems in HBase due to hitting the limit on ZK
connections. This cluster is running HBase 1.1.x and ZK 3.4.6.x on the
I3en EC2 instance type in AWS. Almost all of our RegionServers are
listed in the ZK logs with "Too many connections from /<IP> - max is
60":

2020-06-01 21:42:08,375 - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@193] - Too many connections from /<ip> - max is 60

  On average, each RegionServer has ~250 regions. We are also running
Phoenix on this cluster. Most of the queries are short range scans,
but sometimes we do full table scans too.

   It seems like one simple fix is to increase the maxClientCnxns
property in zoo.cfg to 300, 500, 700, etc. I will probably do that.
But I am just curious to know: in what scenarios are these connections
created/used (scans/puts/deletes, or other RegionServer operations)?
Are these also created by HBase clients/apps (my guess is no)? How can
I calculate the optimal value of maxClientCnxns for my cluster/usage?
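For reference, the change described above is a one-line edit in
zoo.cfg on each ZK server (300 is just the first of the candidate
values mentioned, not a recommendation), and it takes effect after
the ensemble is restarted:

    # zoo.cfg: per-client-IP connection cap (0 disables the limit)
    maxClientCnxns=300

To size it empirically, ZooKeeper's four-letter commands can show
current usage: "echo cons | nc <zk-host> 2181" lists every open
connection by client IP, so you can see how close each RegionServer
already is to the cap before picking a value.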
