On 29/07/13 02:04, Angus Salkeld wrote:
On 26/07/13 09:43 -0700, Clint Byrum wrote:
Excerpts from Zane Bitter's message of 2013-07-26 06:37:09 -0700:
On 25/07/13 19:07, Bartosz Górski wrote:
> We want to start from something simple. At the beginning we are
> assuming no dependencies between resources from different regions.
> Our first use case (the one on the wiki page) uses this assumption,
> which is why it can easily be split into two separate single-region
> templates.
>
> Our goal is to support dependencies between resources from different
> regions. Our second use case (I will add it with more details to the
> wiki page soon) is similar to deploying wordpress on two instances
> (app server + db server) in two different regions (the app server in
> the first region and the db server in the second). The regions will
> be connected to each other via a VPN connection. In this case the
> configuration of the app server depends on the db server: we need to
> know the IP address of the created db server to properly configure
> the app server. This forces us to wait to create the app server until
> the db server has been created.

That's still a fairly simple case that could be handled by a pair of
OS::Heat::Stack resources (one provides a DBServerIP output that is
passed as a parameter to the other region using {'Fn::GetAtt':
['FirstRegionStack', 'Outputs.DBServerIP']}). But it's possible to
imagine circumstances where that approach is at least suboptimal (e.g.
when creating the actual DB server is comparatively quick, but we have
to wait for the entire template, which might be slow).
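To make that concrete, here's a rough sketch of what the master
template might look like in the YAML template format, using the
existing AWS::CloudFormation::Stack resource (the template URLs and
the DBServerIP output/parameter names are invented for illustration;
the nested templates would have to declare them themselves):

  HeatTemplateFormatVersion: '2012-12-12'
  Resources:
    FirstRegionStack:
      Type: AWS::CloudFormation::Stack
      Properties:
        # Invented URL; this nested template would create the db server
        # and declare DBServerIP in its Outputs section.
        TemplateURL: http://example.com/db_region.template
    SecondRegionStack:
      Type: AWS::CloudFormation::Stack
      Properties:
        # Invented URL; this nested template would create the app server
        # and declare a DBServerIP parameter.
        TemplateURL: http://example.com/app_region.template
        Parameters:
          # The app stack waits only for the db stack's output, not for
          # everything else in a single combined template.
          DBServerIP: {'Fn::GetAtt': [FirstRegionStack, 'Outputs.DBServerIP']}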


How about we add an actual heat resource?

So you could aggregate stacks.

We kinda have one with "OS::Heat::Stack", but it doesn't use

(aside: this doesn't actually exist yet, we only have AWS::CloudFormation::Stack at present.)

python-heatclient. We could solve this by adding an "endpoint"
property to the "OS::Heat::Stack" resource. Then if it is not local,
it uses python-heatclient to create the nested stack remotely.

Yes, that's what I was trying (and failing) to suggest.
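For instance, a remote nested stack might end up looking something
like this (purely illustrative: OS::Heat::Stack doesn't exist yet,
the "endpoint" property is only a proposal, and the URLs are made up):

  RemoteDBStack:
    Type: OS::Heat::Stack    # doesn't exist yet (see aside above)
    Properties:
      # Proposed "endpoint" property: a Heat API endpoint in another
      # region. If set, the resource would use python-heatclient to
      # create the nested stack there rather than in the local engine.
      endpoint: http://heat.region-two.example.com:8004/v1
      TemplateURL: http://example.com/db_region.template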


Just a thought.

-Angus


If you break that stack up into two stacks, db and "other slow stuff",
then you can get the Output of the db stack earlier, so that is a
solvable problem.

+1

> More complicated use cases with load balancers and more regions are
> also on our minds.

Good to know, thanks. I'll look forward to reading more about it on the
wiki.

What I'd like to avoid is a situation where anything _appears_ to be
possible (Nova server and Cinder volume in different regions? Sure!
Connect 'em together? Sure!), and the user only finds out later that it
doesn't work. It would be much better to structure the templates in such
a way that only things that are legitimate are expressible. That's not
an achievable goal, but IMO we want to be much closer to the latter than
the former.


These are all predictable limitations and can be handled at the parsing
level.  You will know as soon as you have template + params whether or
not that cinder volume in region A can be attached to the nova server
in region B.

That's true, but IMO it's even better if it's obvious at the time you are writing the template. e.g. if (as is currently the case) there is no mechanism within a template to select a region for each resource, then it's obvious you have to write separate templates for each region (and combine them somehow).

I'm still convinced that none of this matters if you rely on a single
Heat in one of the regions. The whole point of multi-region is to
eliminate a SPOF.

So the idea here would be that you spin up a master template in one region, and this would contain OS::Heat::Stack resources that use python-heatclient to connect to Heat APIs in other regions to spin up the constituent stacks in each region. If a region goes down, even if it is the one with your master template, that's no problem because you can still interact with the constituent stacks directly in whatever region(s) you _can_ reach.
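Putting those pieces together, the master template might look
something like this sketch (resource type, "endpoint" property and
URLs all hypothetical, as above):

  Resources:
    DBRegionStack:
      Type: OS::Heat::Stack
      Properties:
        endpoint: http://heat.region-one.example.com:8004/v1
        TemplateURL: http://example.com/db_region.template
    AppRegionStack:
      Type: OS::Heat::Stack
      Properties:
        endpoint: http://heat.region-two.example.com:8004/v1
        TemplateURL: http://example.com/app_region.template
        Parameters:
          # The cross-region dependency is resolved by the engine
          # running the master stack, but each constituent stack can
          # still be reached directly in its own region if the master's
          # region is down.
          DBServerIP: {'Fn::GetAtt': [DBRegionStack, 'Outputs.DBServerIP']}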

So it solves the non-obviousness problem and the single-point-of-failure problem in one fell swoop. The question for me is whether there are legitimate use cases that this would shut out.

cheers,
Zane.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
