Thank you, Zane, for the clarifications! I misunderstood #2, and that led to the other misunderstandings.
Further questions:

* Are nested stacks aware of their nested-ness? In other words, given any
  nested stack (colocated with the parent stack or not), can I trace it back
  to the parent stack? (On a possibly related note, I see that adopting a
  stack is an option for reassembling a new parent stack from its regional
  parts in the event that the old parent stack is lost.)
* Has this design met the users' needs? In other words, are there any plans
  to make major modifications to this design?

Thanks!

On 9/1/15, 1:47 PM, "Zane Bitter" <zbit...@redhat.com> wrote:

>On 01/09/15 11:41, Lowery, Mathew wrote:
>> This is a Trove question but including Heat as they seem to have solved
>> this problem.
>>
>> Summary: Today, it seems that Trove is not capable of creating a cluster
>> spanning multiple regions. Is that the case and, if so, are there any
>> plans to work on that? Also, are we aware of any precedent solutions
>> (e.g. remote stacks in Heat) or even partially completed spec/code in
>> Trove?
>>
>> More details:
>>
>> I found this nice diagram
>> <https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat/The_Missing_Diagram>
>> created for Heat. As far as I understand it,
>
>Clarifications below...
>
>> #1 is the absence of multi-region support (i.e. what we have today).
>> #2 seems to be a 100% client-based solution. In other words, the Heat
>> services never know about the other stacks.
>
>I guess you could say that.
>
>> In fact, there is nothing tying these stacks together at all.
>
>I wouldn't go that far. The regional stacks still appear as resources in
>their parent stack, so they're tied together by whatever inputs and
>outputs are connected up in that stack.
>
>> #3 seems to show a "master" Heat server that understands "remote stacks"
>> and simply converts those "remote stacks" into calls on regional Heats.
>> I assume here the master stack record is stored by the master Heat.
>> Because the "remote stacks" are full-fledged stacks, they can be managed
>> by their regional Heats if availability of the master or other regional
>> Heats is lost.
>
>Yeah.
>
>> #4, the diagram doesn't seem to match the description (instead of one
>> global Heat, it seems the diagram should show two regional Heats).
>
>It does (they're the two orange boxes).
>
>> In this one, a single arbitrary region becomes the owner of the stack
>> and remote (low-level, not stack) resources are created as needed. One
>> problem is that manageability is lost if the Heat in the owning region
>> is lost. Finally, #5. In #5, it's just #4 but with one and only one
>> Heat.
>>
>> It seems like Heat solved this <https://review.openstack.org/#/c/53313/>
>> using #3 (Master Orchestrator)
>
>No, we implemented #2.
>
>> but where there isn't necessarily a separate master Heat. Remote stacks
>> can be created by any regional stack.
>
>Yeah, that was the difference between #3 and #2 :)
>
>cheers,
>Zane.
>
>> Trove questions:
>>
>> 1. Having sub-clusters (aka remote clusters, aka nested clusters) seems
>>    to be useful (i.e. manageability isn't lost when a region is lost).
>>    But then again, does it make sense to perform a cluster operation on
>>    a sub-cluster?
>> 2. You could forego sub-clusters and just create full-fledged remote
>>    standalone Trove instances.
>> 3. If you don't create full-fledged remote Trove instances (but instead
>>    just call remote Nova), then you cannot do simple things like getting
>>    logs from a node without going through the owning region's Trove.
>>    This is an extra hop and a single point of failure.
>> 4. Even with sub-clusters, the only record of them being related lives
>>    only in the "owning" region. Then again, some ID tying them all
>>    together could be passed to the remote regions.
>> 5. Do we want to allow the piecing together of clusters (sort of like
>>    Heat's "adopt")?
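(Inline note, mostly to check my own understanding of the thread: the #2 pattern Zane describes — regional stacks appearing as resources of a parent stack, tied together by whatever inputs and outputs are connected up — would look roughly like the HOT template below. I'm assuming an `OS::Heat::Stack` resource with a `context`/`region_name` property is the mechanism from the review linked above; `regional.yaml`, the region names, and the `address` output are made up for illustration, and the exact syntax is unverified.)

```yaml
heat_template_version: 2014-10-16

description: >
  Parent stack tying two regional stacks together (pattern #2).
  Each OS::Heat::Stack resource is created in a different region;
  outputs of one can feed parameters of the other.

resources:
  east_stack:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionOne   # assumed multi-region property
      template: { get_file: regional.yaml }

  west_stack:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionTwo
      template: { get_file: regional.yaml }
      parameters:
        # the "inputs and outputs connected up" that tie the stacks together
        peer_address: { get_attr: [east_stack, outputs, address] }
```

If the parent stack is lost, each regional stack is still a full-fledged stack in its own region's Heat, which is what would make the adopt-style reassembly in my first question plausible.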
>>
>> These are some questions floating around my head and I'm sure there are
>> plenty more. Any thoughts on any of this?
>>
>> Thanks,
>> Mat

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev