Hi,

For organizations sharing the keystone database across regions via
Galera: do you just put keystone (and perhaps glance, as was
suggested) in its own cluster that is multi-region, and keep the other
databases in a cluster that lives in only one region (i.e., just local
to their region)? Or are you giving the other services their own
databases in the single multi-region cluster and thus replicating all
the databases? Or is there another solution?
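
To illustrate the first option (hostnames below are made up), I'm
picturing a dedicated WAN-spanning cluster for the shared databases and
separate per-region clusters for everything else, roughly:

  # WAN cluster: keystone (and maybe glance), spans both regions
  wsrep_cluster_name=shared_wan
  wsrep_cluster_address=gcomm://db1.region1,db2.region1,db1.region2,db2.region2
  # (plus a garbd arbitrator at a third site)

  # per-region cluster: nova, cinder, neutron, etc.
  wsrep_cluster_name=region1_local
  wsrep_cluster_address=gcomm://db3.region1,db4.region1,db5.region1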

Thanks,
Curtis.

On Tue, Sep 8, 2015 at 3:22 PM, Jonathan Proulx <j...@jonproulx.com> wrote:
> Thanks Jay & Matt,
>
> That's basically what I thought, so I'll keep thinking it :)
>
> We're not replicating the glance DB because images will be stored in
> different local Ceph storage on each side, so the images won't be
> directly available.  We thought about moving back to a file backend
> and rsync'ing, but RBD gets us lots of fun things we want to keep
> (quick start, copy-on-write thin-cloned ephemeral storage, etc.), so we
> decided to live with making our users copy images around.
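>
> (For reference, that's just the usual RBD store setup in glance-api.conf;
> the pool and user names below are illustrative, not necessarily what we
> run:
>
>   [glance_store]
>   stores = rbd
>   default_store = rbd
>   rbd_store_pool = images
>   rbd_store_user = glance
>   rbd_store_ceph_conf = /etc/ceph/ceph.conf
> )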
>
> -Jon
>
>
>
> On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes <jaypi...@gmail.com> wrote:
>> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
>>>
>>> Hi All,
>>>
>>> I'm pretty close to opening a second region in my cloud at a second
>>> physical location.
>>>
>>> The plan so far had been to share only keystone between the regions
>>> (nova, glance, cinder, etc. would be distinct) and to implement this
>>> using MariaDB with Galera replication between sites, with each site
>>> having its own gmcast_segment to minimize the long-distance chatter,
>>> plus a 3rd site with a galera arbitrator for the obvious reason.
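>>>
>>> Roughly, each node's my.cnf would get something like this (addresses
>>> are placeholders, just to show the segment idea):
>>>
>>>   [mysqld]
>>>   wsrep_provider=/usr/lib/galera/libgalera_smm.so
>>>   wsrep_cluster_name=keystone_shared
>>>   wsrep_cluster_address=gcomm://10.1.0.11,10.1.0.12,10.2.0.11,10.2.0.12
>>>   # site A nodes use gmcast.segment=1, site B uses 2, arbitrator site 0
>>>   wsrep_provider_options="gmcast.segment=1"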
>>
>>
>> I would also strongly consider adding the Glance registry database to the
>> same cross-WAN Galera cluster. At AT&T, we had such a setup for the Keystone
>> and Glance registry databases across 10+ deployment zones in 6+ datacenters
>> nationwide. Besides adjusting the latency-related timeouts in the Galera
>> settings, we made no other modifications to our
>> internal-to-an-availability-zone Nova database Galera cluster settings.
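>>
>> (The knobs I mean are the evs timeouts in wsrep_provider_options; the
>> values below are illustrative rather than exactly what we ran, but along
>> these lines:
>>
>>   wsrep_provider_options="evs.keepalive_period=PT3S;evs.suspect_timeout=PT30S;evs.inactive_timeout=PT1M;evs.install_timeout=PT1M"
>> )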
>>
>> The Keystone and Glance registry databases have a virtually identical read
>> and write data access pattern: small record/row size, small number of
>> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT operations
>> on a small data set. This data access pattern is an ideal fit for a
>> WAN-replicated Galera cluster.
>>
>>> Today I was warned against using this in a multi-writer setup. I'd
>>> planned on one writer per physical location.
>>
>>
>> I don't know who warned you about this, but it's not an issue in the real
>> world. We ran in full multi-writer mode, with each deployment zone writing
>> to and reading from its nearest Galera cluster nodes. No issues.
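>>
>> (Concretely, that just means each zone's services pointed at a local
>> node or VIP; e.g. a zone's keystone.conf would have something like the
>> following, with a made-up hostname:
>>
>>   [database]
>>   connection = mysql://keystone:PASSWORD@galera-vip.zone1.example.com/keystone
>> )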
>>
>> Best,
>> -jay
>>
>>> I had been under the impression this was the 'done thing' for
>>> geographically separate regions; was I wrong? Should I replicate just
>>> for DR and always pick a single possible remote write site?
>>>
>>> The site-to-site link is 2x10G (different physical paths). The short
>>> link averages 2.2ms latency (2.1ms low, 2.5ms high over 250 packets);
>>> the long link shouldn't be much worse, but it isn't complete yet so I
>>> can't test it.
>>>
>>> -Jon
>>>



-- 
Twitter: @serverascode
Blog: serverascode.com

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
