saintstack commented on a change in pull request #1164: HBASE-23331: 
Documentation for HBASE-18095
URL: https://github.com/apache/hbase/pull/1164#discussion_r378054451
 
 

 ##########
 File path: 
dev-support/design-docs/HBASE-18095-Zookeeper-less-client-connection-design.adoc
 ##########
 @@ -0,0 +1,112 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+
+= HBASE-18095: Zookeeper-less client connection
+
+
+== Context
+Currently, ZooKeeper (ZK) lies in the critical code path of connection 
initialization. To set up a connection to a given HBase cluster, the client 
relies on the ZooKeeper quorum configured in its hbase-site.xml and attempts to 
fetch the following information (a sketch of this bootstrap follows the list).
+
+* ClusterID
+* Active HMaster server name
+* Meta table region locations
+
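+For illustration only (not part of the original design text), a minimal sketch 
of this ZK-based bootstrap as seen from the client side; the quorum value is a 
placeholder.
+
+[source, java]
+-----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+public class ZkBootstrapExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    // Today the client only needs the ZK quorum; the cluster ID, active master
+    // and meta region locations are all discovered through ZK during setup.
+    conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");
+    try (Connection connection = ConnectionFactory.createConnection(conf)) {
+      System.out.println("Connection established: " + !connection.isClosed());
+    }
+  }
+}
+-----
+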
+ZK is deemed the source of truth since the other processes that maintain 
cluster state persist their changes to this data into ZK, so it is the obvious 
place for clients to look up the latest cluster state. However, this comes with 
its own set of problems, some of which are listed below.
+
+* Timeouts and retry logic for ZK clients are managed separately from the 
HBase configuration. This adds administration overhead for end users (for 
example, separate timeouts have to be configured for the different types of 
RPCs: client->master, client->ZK, etc.; see the sketch after this list) and 
prevents HBase from having a single, holistic timeout configuration that 
applies to all RPCs.
+* If there is any issue with ZK (like connection overload / timeouts), the 
entire HBase service appears frozen and there is little visibility into it.
+* Exposing ZooKeeper to all the clients can be risky since it can potentially 
be abused for DDoS attacks.
+* The embedded ZK client is bundled with the hbase-client jar as a dependency 
(along with its log spew :-]).
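+
+To make the first point concrete, an illustrative (not authoritative) snippet 
showing the two separately managed sets of timeout knobs a client juggles 
today; the values are arbitrary.
+
+[source, java]
+-----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+public class TimeoutConfigExample {
+  public static void main(String[] args) {
+    Configuration conf = HBaseConfiguration.create();
+    // HBase RPC timeouts are tuned through HBase-level keys...
+    conf.setInt("hbase.rpc.timeout", 30000);
+    conf.setInt("hbase.client.operation.timeout", 60000);
+    // ...while the embedded ZK client has its own, separately managed knobs.
+    conf.setInt("zookeeper.session.timeout", 90000);
+    conf.setInt("zookeeper.recovery.retry", 3);
+  }
+}
+-----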
+
+== Goal
+
+We would like to remove this ZK dependency in the HBase client and instead 
have the clients query a preconfigured list of active and standby master 
host:port addresses. This brings all the client interactions with HBase under 
the same RPC framework that is holistically controlled by a set of hbase client 
configuration parameters. It also alleviates the pressure on the ZK cluster, 
which is critical from an operational standpoint since core processes like 
replication, log splitting, master election, etc. depend on it. The next 
section describes the kind of changes needed on both the server and client side 
to support this behavior.
+
+== Proposed design
+
+As mentioned above, clients now get a preconfigured list of active and standby 
master addresses that they can query to fetch the meta information needed for 
connection setup. Something like:
+
+[source, xml]
+-----
+<property>
+  <name>hbase.masters</name>
+  <value>master1:16000,master2:16001,master3:16000</value>
+</property>
+-----
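+
+For illustration only, a minimal client-side sketch assuming the proposed 
hbase.masters property above is honored during connection bootstrap; the 
host:port values, the table name "test" and the row key "row1" are 
placeholders.
+
+[source, java]
+-----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class MasterBootstrapExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    // Bootstrap from the configured masters instead of a ZK quorum.
+    conf.set("hbase.masters", "master1:16000,master2:16001,master3:16000");
+    try (Connection connection = ConnectionFactory.createConnection(conf);
+         Table table = connection.getTable(TableName.valueOf("test"))) {
+      // The cluster ID, active master and meta locations are fetched from one
+      // of the configured masters during setup and cached by the client.
+      Result result = table.get(new Get(Bytes.toBytes("row1")));
+      System.out.println("Fetched " + result.size() + " cells");
+    }
+  }
+}
+-----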
+
+Clients should be robust enough to handle changes to this parameter, since 
master hosts can be added or removed over time and not every client can afford 
a restart. A hypothetical way of picking up such changes is sketched below.
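+
+Purely as a hypothetical illustration (the class name and refresh interval are 
invented for this sketch and are not part of the design), a client-side helper 
could periodically re-read the configured master list so that long-lived 
clients survive master host changes without a restart.
+
+[source, java]
+-----
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+// Hypothetical helper, not an HBase API: keeps an up-to-date view of the
+// configured master endpoints for long-lived client processes.
+public class RefreshingMasterList {
+  private volatile List<InetSocketAddress> masters = load();
+  private final ScheduledExecutorService refresher =
+      Executors.newSingleThreadScheduledExecutor();
+
+  public RefreshingMasterList() {
+    // Re-read the configuration every 5 minutes (the interval is arbitrary).
+    refresher.scheduleAtFixedRate(() -> masters = load(), 5, 5, TimeUnit.MINUTES);
+  }
+
+  private static List<InetSocketAddress> load() {
+    // A freshly created Configuration re-reads the resource files, so edits to
+    // hbase-site.xml are picked up on the next refresh.
+    String csv = HBaseConfiguration.create().get("hbase.masters", "");
+    List<InetSocketAddress> parsed = new ArrayList<>();
+    for (String hostPort : csv.split(",")) {
+      String[] parts = hostPort.trim().split(":");
+      if (parts.length == 2) {
+        parsed.add(InetSocketAddress.createUnresolved(parts[0],
+            Integer.parseInt(parts[1])));
+      }
+    }
+    return parsed;
+  }
+
+  public List<InetSocketAddress> getMasters() {
+    return masters;
+  }
+}
+-----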
+
+One thing to note here is that having masters in the init/read/write path for 
clients means the following:
+
+* At least one active or standby master is now needed for connection creation. 
Earlier this was not a requirement because the clients looked up the cluster ID 
from the relevant znode and initialized successfully; technically, a master did 
not need to be around to create a connection to the cluster.
+* Masters are now an active part of the read/write path of the client life 
cycle under certain scenarios: if the client cache of meta locations/active 
master is purged or stale, at least one master (active or standby) serving the 
latest information must be reachable. Earlier this information was served by 
ZK, and clients simply looked up the latest cluster ID/active master/meta 
locations from the relevant znodes and got going.
+* There is a higher connection load on the masters than before.
+* More state synchronization traffic (see below).
+
+End users should factor these requirements into their cluster deployment if 
they intend to use this feature.
+
+=== Server side changes
+
+Now that the master endpoints are considered the source of truth for clients, 
they should track the latest meta information for the cluster ID, active master 
and meta table locations. Since the clients can connect to any master endpoint 
(details below), all the masters (active/standby) now track all the relevant 
meta information. The idea is to implement an in-memory cache local to all the 
masters that keeps up with changes to this metadata. This is tracked in the 
following JIRAs.
 
 Review comment:
   s/they should/they/

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
