[ https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113440#comment-17113440 ]

Bharath Vissapragada commented on HBASE-18095:
----------------------------------------------

> Maybe we could add an option called fallbackToZk? 

Implementation-wise, I think it's pretty simple to do that. We can have a 
"compound" registry implementation that delegates to primary and secondary 
registries. But yes, if all the masters are down and only the ZK servers are 
available, the cluster is already in bad shape; I think there is very little a 
client could do with the connection.
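
For illustration, the fallback flavor could look roughly like this (a minimal 
sketch; "Registry" and its method are made-up stand-ins for the actual client 
registry interface):

{code:java}
import java.io.IOException;

// Stand-in for the real client registry interface (illustrative only).
interface Registry {
  String getMetaLocation() throws IOException;
}

/** Delegates to a primary registry, falling back to a secondary one. */
class CompoundRegistry implements Registry {
  private final Registry primary;   // e.g. master-based registry
  private final Registry fallback;  // e.g. ZK-based registry

  CompoundRegistry(Registry primary, Registry fallback) {
    this.primary = primary;
    this.fallback = fallback;
  }

  @Override
  public String getMetaLocation() throws IOException {
    try {
      return primary.getMetaLocation();
    } catch (IOException e) {
      // Primary endpoints unreachable; try the fallback as a last resort.
      return fallback.getMetaLocation();
    }
  }
}
{code}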

That said, after following the discussion here and in HBASE-11288, I think what 
the current registry implementations lack (out of the box) is some sort of 
dynamic resolver interface and on-the-fly re-configuration. Currently everyone 
exploits infrastructure-specific solutions (DNS resolution, LVS, etc.) to get 
the RPCs load-balanced. I was wondering if we can abstract some of that out and 
make it pluggable in the client code. 

For example, a simple interface like

{code:java}
/** Supplies the current set of registry endpoints (e.g. "host:port" pairs). */
interface RegistryEndPointResolver {
  Set<String> getRegistryEndPoints();
}
{code}

The default implementation would be based on the hbase-site.xml config, polled 
every few seconds, but everyone could plug in their own implementation based 
on, say, DNS or an existing service discovery solution. Thoughts?
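
For instance, a DNS-backed implementation of the interface above (purely a 
sketch; the hostname, port handling, and error policy are assumptions) might 
look like:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/**
 * Sketch: resolve a single round-robin DNS name (hypothetical, e.g.
 * "hbase-masters.example.com") to the current set of registry endpoints.
 */
class DnsRegistryEndPointResolver implements RegistryEndPointResolver {
  private final String fqdn;
  private final int port;

  DnsRegistryEndPointResolver(String fqdn, int port) {
    this.fqdn = fqdn;
    this.port = port;
  }

  @Override
  public Set<String> getRegistryEndPoints() {
    Set<String> endpoints = new HashSet<>();
    try {
      for (InetAddress addr : InetAddress.getAllByName(fqdn)) {
        endpoints.add(addr.getHostAddress() + ":" + port);
      }
    } catch (UnknownHostException e) {
      // Resolution failed: return an empty set and let the caller decide
      // whether to retry or fall back.
    }
    return Collections.unmodifiableSet(endpoints);
  }
}
{code}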


> Provide an option for clients to find the server hosting META that does not 
> involve the ZooKeeper client
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-18095
>                 URL: https://issues.apache.org/jira/browse/HBASE-18095
>             Project: HBase
>          Issue Type: New Feature
>          Components: Client
>            Reporter: Andrew Kyle Purtell
>            Assignee: Bharath Vissapragada
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.3.0
>
>         Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch
>
>
> Clients are required to connect to ZooKeeper to find the location of the 
> regionserver hosting the meta table region. Site configuration provides the 
> client a list of ZK quorum peers, and the client uses an embedded ZK client 
> to query the meta location. Timeouts and retry behavior of this embedded ZK 
> client are managed orthogonally to HBase layer settings, and in some cases 
> the ZK client cannot manage what in theory the HBase client can, i.e. fail 
> fast upon outage or network partition.
> We should consider new configuration settings that provide a list of 
> well-known master and backup master locations, and with this information the 
> client can contact any of the master processes directly. Any master in either 
> active or passive state will track meta location and respond to requests for 
> it with its cached last known location. If this location is stale, the client 
> can ask again with a flag set that requests the master refresh its location 
> cache and return the up-to-date location. Every client interaction with the 
> cluster thus uses only HBase RPC as transport, with appropriate settings 
> applied to the connection. The configuration toggle that enables this 
> alternative meta location lookup should be false by default.
> This removes the requirement that HBase clients embed the ZK client and 
> contact the ZK service directly at the beginning of the connection lifecycle. 
> This has several benefits. The ZK service need not be exposed to clients, and 
> thus to their potential abuse, while no benefit that ZK provides to the HBase 
> server cluster is compromised. Normalizing HBase client and ZK client timeout 
> settings and retry behavior, which is in some cases impossible (e.g. for 
> fail-fast), is no longer necessary. 
> And, from [~ghelmling]: There is an additional complication here for 
> token-based authentication. When a delegation token is used for SASL 
> authentication, the client uses the cluster ID obtained from ZooKeeper to 
> select the token identifier to use. So there would also need to be some 
> ZooKeeper-less, unauthenticated way to obtain the cluster ID. 
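
Sketched out, the lookup flow the issue describes might look like this 
(MasterStub and locateMeta() are hypothetical stand-ins for the actual master 
RPC interface):

{code:java}
import java.io.IOException;
import java.util.List;

/** Rough sketch of the master-based meta lookup flow from the issue. */
class MasterBasedMetaLocator {
  /** Hypothetical RPC stub for an active or backup master. */
  interface MasterStub {
    // refresh=true asks the master to refresh its cached location first.
    String locateMeta(boolean refresh) throws IOException;
  }

  private final List<MasterStub> masters; // active + backup masters

  MasterBasedMetaLocator(List<MasterStub> masters) {
    this.masters = masters;
  }

  /** Ask each configured master in turn until one answers. */
  String locateMeta(boolean refresh) throws IOException {
    IOException last = null;
    for (MasterStub master : masters) {
      try {
        return master.locateMeta(refresh);
      } catch (IOException e) {
        last = e; // unreachable master; try the next one
      }
    }
    throw last != null ? last : new IOException("no masters configured");
  }
}
{code}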


