[Joe]: For reliability purposes, I suggest that the KeyStone client provide a 
fail-safe design: a primary KeyStone server and a secondary KeyStone server 
(or even a tertiary KeyStone server). If the primary KeyStone server is out of 
service, the KeyStone client will try the secondary KeyStone server. 
Different KeyStone clients may be configured with different primary and 
secondary KeyStone servers.
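
A minimal sketch of the fail-safe behavior I have in mind, using raw HTTP 
against hypothetical endpoint names (a production client would wrap 
keystoneclient/keystoneauth rather than requests):

    import requests

    # Ordered per client: each site can list a different primary first.
    KEYSTONE_ENDPOINTS = [
        "http://keystone-site-a.example.com:5000",  # primary (hypothetical)
        "http://keystone-site-b.example.com:5000",  # secondary (hypothetical)
    ]

    AUTH_BODY = {"auth": {"identity": {"methods": ["password"], "password": {
        "user": {"name": "demo", "domain": {"id": "default"},
                 "password": "secret"}}}}}

    def issue_token():
        """Try each KeyStone server in order; return the first token issued."""
        last_error = None
        for endpoint in KEYSTONE_ENDPOINTS:
            try:
                resp = requests.post(endpoint + "/v3/auth/tokens",
                                     json=AUTH_BODY,
                                     timeout=5)  # fail fast so fallback helps
                resp.raise_for_status()
                return resp.headers["X-Subject-Token"]
            except requests.RequestException as exc:
                last_error = exc  # server unreachable; try the next one
        raise RuntimeError("all KeyStone servers failed") from last_error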

[Adam]: Makes sense, but that can be handled outside of Keystone using HA and 
Heartbeat and a whole slew of technologies.  Each Keystone server can validate 
each other's tokens.
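
(Note that with Fernet, that cross-validation only holds if the key repository 
is replicated to every site; a toy illustration of the underlying 
cryptography.fernet primitive, with a hypothetical payload:)

    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()  # distributed to every KeyStone site
    site_a = Fernet(shared_key)
    site_b = Fernet(shared_key)

    token = site_a.encrypt(b"token payload")          # issued at site A
    assert site_b.decrypt(token) == b"token payload"  # validated at site B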
For cross-site KeyStone HA, the backend can leverage a MySQL Galera cluster 
for synchronous multi-site database replication to provide high availability. 
The KeyStone front end, however, is a web service (the API server) accessed 
through an endpoint address (a name, a domain name, or an IP address), like 
http://.... or an IP address.
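
For illustration, pointing KeyStone at the local Galera node is just a 
connection-string change in its configuration (the host name below is 
hypothetical):

    # keystone.conf (sketch)
    [database]
    # Writes land on the local node; Galera replicates them
    # synchronously to the peer sites.
    connection = mysql+pymysql://keystone:secret@galera-local.example.com/keystone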

AFAIK, HA for a web service in a multi-site scenario is usually done through a 
DNS-based geo load balancer. The shortcoming of that approach is that fault 
recovery (forwarding requests to a healthy web service) takes longer; the delay 
depends on the DNS configuration, in particular the record TTL. The other way 
is to put a load balancer like LVS in front of the KeyStone web services across 
sites. Then either the LVS sits in one site (so the KeyStone client is 
configured with a single IP-based endpoint, but the LVS itself lacks cross-site 
HA), or LVS instances run in multiple sites and their IPs are registered in the 
DNS or name server (so the KeyStone client is configured with a single 
name-based endpoint, which brings back the same DNS issue just mentioned).
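
To make the DNS timing concrete, a geo-balanced endpoint boils down to records 
like the following (hypothetical zone):

    ; sketch of a DNS-balanced KeyStone endpoint
    keystone.example.com.  60  IN  A  203.0.113.10   ; site A
    keystone.example.com.  60  IN  A  198.51.100.20  ; site B

Even with a 60-second TTL, resolver caches can keep sending clients to a dead 
site for up to a minute after it fails, plus whatever the health-check 
interval adds.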

Therefore, I still think that a KeyStone client with a fail-safe design 
(primary KeyStone server plus a secondary KeyStone server) would be a very 
high-gain, low-investment multi-site high-availability solution. It's just like 
MySQL itself: there are external high-availability solutions (for example, 
Pacemaker+Corosync+DRBD), but there is also built-in clusterware like Galera.

Best Regards
Chaoyi Huang ( Joe Huang )


From: Adam Young [mailto:ayo...@redhat.com]
Sent: Tuesday, March 17, 2015 10:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge 
token size

On 03/17/2015 02:51 AM, joehuang wrote:
It's not realistic to deploy the KeyStone service (including the backend store) 
in each site if the number of sites is, for example, more than 10. The reason 
is that the stored data, including data related to revocation, needs to be 
replicated to all sites synchronously. Otherwise, the API server might attempt 
to use a token before it can be validated in the target site.

Replicating revocation data across 10 sites will be tricky, but far better 
than replicating all of the token data.  Revocations should be relatively rare.


When Fernet tokens are used in a multi-site scenario, each API request will ask 
KeyStone for token validation. The cloud will be out of service if KeyStone 
stops working, so the KeyStone service needs to run in several sites.

There will be multiple Keystone servers, so each consumer should talk to its 
local instance.


For reliability purposes, I suggest that the KeyStone client provide a 
fail-safe design: a primary KeyStone server and a secondary KeyStone server 
(or even a tertiary KeyStone server). If the primary KeyStone server is out of 
service, the KeyStone client will try the secondary KeyStone server. Different 
KeyStone clients may be configured with different primary and secondary 
KeyStone servers.

Makes sense, but that can be handled outside of Keystone using HA and Heartbeat 
and a whole slew of technologies.  Each Keystone server can validate each 
other's tokens.