On Fri, 2012-09-07 at 08:31 -0400, Hugh Brock wrote:
> On Thu, Sep 06, 2012 at 09:47:40PM -0700, Ian Main wrote:
> > On Thu, Sep 06, 2012 at 03:56:44PM -0400, Scott Seago wrote:
> > > On 09/04/2012 05:25 PM, Ian Main wrote:
> > > >On Mon, Sep 03, 2012 at 12:44:05PM +0200, Jan Provazník wrote:
> > > >>On 08/30/2012 10:23 PM, Ian Main wrote:
> > > >>>On Thu, Aug 30, 2012 at 09:21:37AM +0200, Jan Provaznik wrote:
> > > >>>>On 08/29/2012 09:55 PM, Ian Main wrote:
> > > >>>>>On Mon, Aug 27, 2012 at 02:47:42PM +0200, Jan Provaznik wrote:
> > > >>>>>>On 08/21/2012 06:15 PM, Tomas Sedovic wrote:
> > > >>>>>>>Hey Folks,
> > > >>>>>>>
> > > >>>>>[snip]
> > > >>>>>
> > > >>>>>>>### Querying Heat data from Conductor ###
> > > >>>>>>>
> > > >>>>>>>Heat doesn't support any callbacks. When Conductor wants to know
> > > >>>>>>>details
> > > >>>>>>>about the stack it launched, it will use the CloudFormation API to
> > > >>>>>>>query
> > > >>>>>>>the data.
> > > >>>>>>>
> > > >>>>>>>For the proof of concept stage, we will just issue the query to
> > > >>>>>>>Heat
> > > >>>>>>>upon every relevant UI action: e.g. `ListStacks` when showing
> > > >>>>>>>deployables in the UI, `DescribeStackResource` when showing the
> > > >>>>>>>details of
> > > >>>>>>>a single deployable, `DescribeStackEvents` to get deployable
> > > >>>>>>>events, etc.
> > > >>>>>>>
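For illustration, that per-UI-action polling could look roughly like the sketch below. This is not actual Conductor code: the client interface, event shape, and field names are invented, and a real implementation would issue signed requests against Heat's CloudFormation-compatible endpoint.

```python
# Sketch: how Conductor might poll Heat's CFN-compatible API for new
# stack events. The client object here is a stand-in; only the
# dedup-on-poll logic is the point.

class StackEventPoller:
    """Remembers the last event seen per stack and returns only new ones."""

    def __init__(self, cfn_client):
        self.cfn = cfn_client
        self.last_seen = {}  # stack_name -> id of latest event already reported

    def poll(self, stack_name):
        # Hypothetical call mirroring the DescribeStackEvents action.
        events = self.cfn.describe_stack_events(stack_name)
        seen = self.last_seen.get(stack_name)
        # Events arrive oldest-first in this sketch; report those after `seen`.
        new = [e for e in events if seen is None or e["id"] > seen]
        if new:
            self.last_seen[stack_name] = new[-1]["id"]
        return new


class FakeCfnClient:
    """Fake client standing in for Heat's real CFN endpoint."""

    def __init__(self, events):
        self._events = events

    def describe_stack_events(self, stack_name):
        return self._events


events = [{"id": 1, "status": "CREATE_IN_PROGRESS"},
          {"id": 2, "status": "CREATE_COMPLETE"}]
poller = StackEventPoller(FakeCfnClient(events))
print(len(poller.poll("mystack")))  # 2 -- everything is new on the first poll
print(len(poller.poll("mystack")))  # 0 -- nothing new on the second
```

The same pattern applies to `ListStacks` and `DescribeStackResource`: query on demand, remember what was last shown, and diff.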
> > > >>>>>>This is OK for POC, but it would be really nice to have callback
> > > >>>>>>support for real integration.
> > > >>>>>>
> > > >>>>>>nit: you probably meant 'deployment' instead of 'deployable' in the
> > > >>>>>>paragraph above.
> > > >>>>>I am curious as to why you think it is necessary to use callbacks and
> > > >>>>>mirror the data held in Heat within Aeolus?
> > > >>>>>
> > > >>>>> Ian
> > > >>>>>
> > > >>>>Conductor needs to know if/when a deployment or single instance
> > > >>>>changed its state (is this what you mean by mirroring data?). W/o
> > > >>>>notification support on the Heat side, Conductor would have to poll
> > > >>>>Heat, which is painful (it would require a dbomatic-like service on
> > > >>>>the Conductor side) and not very efficient.
> > > >>>I agree a dbomatic-type service is error-prone. However, mirroring
> > > >>>data from one service to another is a very difficult problem to solve
> > > >>>well and make reliable.
> > > >>>
> > > >>>Is this required for some sort of reporting? If it is just for the
> > > >>Yes, reporting and keeping history logs about instances is part of
> > > >>Conductor. Conductor also uses this information when choosing a
> > > >>provider when launching an instance and also for quota checking.
> > > >This could be done either way, but really you just need a tally of
> > > >instances per user and per cloud. I'm not saying it is ideal, but I
> > > >wouldn't say it's impossible or even unwise to consider direct querying
> > > >even here.
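Counted live, that tally is just an aggregation over whatever the launch service reports. A rough Python sketch, where the instance records and field names are made up purely for illustration:

```python
from collections import Counter

def tally_instances(instances):
    """Count instances per user and per cloud provider.

    `instances` is a list of dicts standing in for whatever a live
    query against Heat/Deltacloud would return.
    """
    per_user = Counter(i["owner"] for i in instances)
    per_cloud = Counter(i["provider"] for i in instances)
    return per_user, per_cloud

instances = [
    {"owner": "alice", "provider": "ec2-us-east-1"},
    {"owner": "alice", "provider": "mock"},
    {"owner": "bob", "provider": "ec2-us-east-1"},
]
per_user, per_cloud = tally_instances(instances)
print(per_user["alice"])           # 2
print(per_cloud["ec2-us-east-1"])  # 2
```

Quota checking then reduces to comparing these counts against per-user and per-provider limits at launch time.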
> > > One difficulty is that if we're talking about making Heat optional,
> > > Heat and any other launching infrastructure (including, perhaps, the
> > > current/legacy one if that remains) will need to handle
> > > quota/instance state and queries/etc in the same way. Currently
> > > instance metadata/state/quota checking is tracked in Conductor
> > > itself. As long as Heat is optional, I'm not sure how we would
> > > change that. If Heat became a complete replacement, we _could_ query
> > > all of this stuff live (rather than caching), but there are a lot of
> > > moving parts in the existing infrastructure that would need to be
> > > rewritten. After the pain of handling image metadata as a separate
> > > server (with data only available with live calls outside), we're in
> > > the process of moving that back into conductor. We need to be
> > > careful about deciding to do the opposite with instances and
> > > deployments. I'm not saying we can't/shouldn't do that, but we'd
> > > better make sure we've got answers to the various pain points --
> > > performance, searching, object associations, permissions, etc. For
> > > permissions, in particular, even if all instance/deployment metadata
> > > were in heat (and only queried live from Conductor), we'd at least
> > > need placeholder objects on our side, so that we don't lose the
> > > ability to manage permissions on a per-object basis.
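The placeholder-object idea above could be sketched as follows: Conductor keeps a thin local record per Heat-managed deployment purely so per-object permissions can be attached, while all live state stays in Heat. The class and field names here are invented for illustration and are not Conductor's actual models.

```python
# Sketch: a minimal local placeholder keyed by the external Heat stack id.
# Heat remains the source of truth for state; Conductor only stores enough
# to answer "may this user do this action on this object?".

class DeploymentPlaceholder:
    def __init__(self, heat_stack_id, owner):
        self.heat_stack_id = heat_stack_id  # key into Heat
        # owner gets full rights; the action names are arbitrary examples
        self.permissions = {owner: {"view", "stop", "delete"}}

    def grant(self, user, action):
        self.permissions.setdefault(user, set()).add(action)

    def allowed(self, user, action):
        return action in self.permissions.get(user, set())


d = DeploymentPlaceholder("stack-123", owner="alice")
d.grant("bob", "view")
print(d.allowed("bob", "view"))    # True
print(d.allowed("bob", "delete"))  # False
```

Everything else about the deployment (resources, events, status) would be fetched live from Heat using the stored stack id.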
> >
> > Yeah, I'm really just asking people to think about it a bit more and
> > consider what is really involved. Ultimately, as is the nature of any
> > open source project, we will just have to see how things unfold :).
>
> Something else to think about that has some bearing on this question:
>
> David Lutterkort is working on a Deltacloud instance state tracker that
> would sit in front of Deltacloud and be authoritative for instance
> status, the success or failure of state transitions, etc. (We should
> probably help out with this too.)
"Working on" as in: it's on my todo list - it would help me if somebody
told me loudly when Aeolus would need suc a thing (or tell me as soon as
there is a date for that)
I think the main issue that Heat introduces here is this: with Heat, the
right architecture is Conductor <-> Heat <-> Deltacloud, which also goes
for state tracking, meaning that the DC tracker would notify Heat of
instance state changes, and Heat would aggregate these into information
about the corresponding deployment and either react to the changes or
notify Conductor about changes in deployments.
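That aggregation step could be as simple as the sketch below: given per-instance states reported by the DC tracker, derive one deployment-level state to hand to Conductor. The state names and precedence rules are assumptions for illustration, not anything Heat actually implements.

```python
# Sketch: collapse individual instance states into one deployment state.
# Precedence here is a guess: any error fails the deployment, all-running
# means running, all-stopped means stopped, anything else is pending.

def deployment_state(instance_states):
    """instance_states: dict of instance name -> state string."""
    states = set(instance_states.values())
    if "ERROR" in states:
        return "FAILED"
    if states == {"RUNNING"}:
        return "RUNNING"
    if states == {"STOPPED"}:
        return "STOPPED"
    return "PENDING"

print(deployment_state({"web": "RUNNING", "db": "RUNNING"}))  # RUNNING
print(deployment_state({"web": "RUNNING", "db": "PENDING"}))  # PENDING
print(deployment_state({"web": "ERROR", "db": "RUNNING"}))    # FAILED
```

Heat would run something like this on each tracker notification and push a deployment-level change to Conductor only when the derived state actually changes.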
> In the long term, I wonder if it makes sense to move Conductor towards
> an architecture where it always depends on some external service --
> Heat, Deltacloud, maybe even Foreman for a bare-metal cloud -- for state
> information.
I can't say all that much about the best architecture for Conductor, but
I think that Conductor should either use these services as mandatory
pieces, or not at all. Making the use of Heat optional will not make
anybody happy.
> You're right (Scott) that given that cloud brokering (multiplexing
> credentials) and self-service placement and so on are always going to
> require Conductor to track permissions, Conductor will always have to
> have some representation of the running instance in the local db for
> permission queries and so on. However I'm not sure that causes the same
> sync issues that Ian is worried about (Ian correct me if I'm wrong).
I'd still be interested in thinking more about a strict IaaS brokering
service as part of DC - IOW, a RESTful service that exposes a DC
frontend, but on the backend multiplexes between different clouds based
on policy etc.
IMHO, conductor right now has three big goals:
* IaaS brokering
* application (deployable) management
* user interface for the above two
Breaking that functionality into separate services would make each of
them more versatile, and help focus development of each of them.
David