Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-15 Thread Gary Kotton
Hi,
Can you please join us at the upcoming scheduler meeting? That will give 
you a chance to bring up your ideas and discuss them with a larger audience.
https://wiki.openstack.org/wiki/Meetings#Scheduler_Sub-group_meeting
I think that for the summit it would be a good idea if we could also have at 
least one session with the Heat folks to see how we can combine efforts.
Thanks
Gary

From: Mike Spreitzer <mspre...@us.ibm.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Sunday, September 15, 2013 10:19 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [heat] [scheduler] Bringing things together for 
Icehouse

I've read up on recent goings-on in the scheduler subgroup, and have some 
thoughts to contribute.

But first I must admit that I am still a newbie to OpenStack, and still am 
missing some important clues.  One thing that mystifies me is this: I see 
essentially the same thing, which I have generally taken to calling holistic 
scheduling, discussed in two mostly separate contexts: (1) the (nova) scheduler 
context, and (2) the ambitions for heat.  What am I missing?

I have read the Unified Resource Placement Module document (at 
https://docs.google.com/document/d/1cR3Fw9QPDVnqp4pMSusMwqNuB_6t-t_neFqgXA98-Ls/edit?pli=1#)
 and NovaSchedulerPerspective document (at 
https://docs.google.com/document/d/1_DRv7it_mwalEZzLy5WO92TJcummpmWL4NWsWf0UWiQ/edit?pli=1#heading=h.6ixj0ctv4rwu).
  My group already has running code along these lines, and thoughts for future 
improvements, so I'll mention some salient characteristics.  I have read the 
etherpad at https://etherpad.openstack.org/IceHouse-Nova-Scheduler-Sessions - 
and I hope my remarks will help fit these topics together.

Our current code uses one long-lived process to make placement decisions.  The 
information it needs to do this job is proactively maintained in its memory.  
We are planning to try replacing this one process with a set of equivalent 
processes, though we are not sure how well that will work out (we are a research group).

We make a distinction between desired state, target state, and observed state.  
The desired state comes in through REST requests, each giving a full virtual 
resource topology (VRT).  A VRT includes constraints that affect placement, but 
does not include actual placement decisions.  Those are made by what we call 
the placement agent.  Yes, it is separate from orchestration (even in the first 
architecture figure in the u-rpm document the orchestration is separate --- the 
enclosing box does not abate the essential separateness).  In our architecture, 
orchestration is downstream from placement (as in u-rpm).  The placement agent 
produces target state, which is essentially desired state augmented by 
placement decisions.  Observed state is what comes from the lower layers 
(Software Defined Compute, Storage, and Network).  We mainly use OpenStack APIs 
for the lower layers, and have added a few local extensions to make the whole 
story work.
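
To make that concrete, here is a purely illustrative Python sketch (hypothetical 
names, not our actual code) of the three kinds of state:

    # Purely illustrative, hypothetical names -- not our actual code.
    from collections import namedtuple

    # A resource as it appears in a VRT: a demand plus placement
    # constraints, but no placement decision.
    VirtualResource = namedtuple("VirtualResource", "name demand constraints")

    # Desired state: the VRT exactly as requested over REST.
    DesiredState = namedtuple("DesiredState", "resources")

    # Target state: desired state augmented by the placement agent's
    # decisions (a map from resource name to chosen host or container).
    TargetState = namedtuple("TargetState", "desired placement")

    # Observed state: allocations as reported by the lower layers.
    ObservedState = namedtuple("ObservedState", "allocations")

    vrt = DesiredState(resources=[
        VirtualResource("web", {"vcpus": 2}, ["collocate:web,db"]),
        VirtualResource("db", {"vcpus": 4}, []),
    ])
    target = TargetState(desired=vrt, placement={"web": "host-1", "db": "host-1"})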

The placement agent judges available capacity by subtracting current 
allocations from raw capacity.  The placement agent maintains in its memory a 
derived thing we call effective state; the allocations in effective state are 
the union of the allocations in target state and the allocations in observed 
state.  Since the orchestration is downstream, some of the planned allocations 
are not in observed state yet.  Since other actors can use the underlying 
cloud, and other weird sh*t happens, not all the allocations are in target 
state.  That's why placement is done against the union of the allocations.  
This is somewhat conservative, but the alternatives are worse.
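
Here is a minimal, hypothetical sketch of that effective-state calculation 
(single-number capacities and made-up data, not our actual code):

    # Hypothetical sketch: free capacity is judged against the union of
    # target (planned) and observed (reported) allocations.

    def effective_allocations(target_allocs, observed_allocs):
        # Union of allocations, keyed by (host, resource id).
        effective = dict(observed_allocs)
        for key, demand in target_allocs.items():
            # A planned allocation not yet visible downstream still counts.
            effective.setdefault(key, demand)
        return effective

    def free_capacity(raw_capacity, target_allocs, observed_allocs):
        # Raw capacity minus every allocation we know about or plan.
        free = dict(raw_capacity)                     # host -> capacity units
        allocs = effective_allocations(target_allocs, observed_allocs)
        for (host, _rid), demand in allocs.items():
            free[host] -= demand
        return free

    # The volume planned for "h1" is not yet observed, but it already
    # reduces the capacity offered to new placements.
    raw = {"h1": 16, "h2": 16}
    target = {("h1", "vm-1"): 4, ("h1", "vol-7"): 2}
    observed = {("h1", "vm-1"): 4, ("h2", "vm-9"): 8}
    print(free_capacity(raw, target, observed))       # h1: 10, h2: 8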

Note that placement is concerned with allocations rather than current usage.  
Current usage fluctuates much faster than you would want placement to.  
Placement needs to be done with a long-term perspective.  Of course, that 
perspective can be informed by usage information (as well as other sources) --- 
but it remains a distinct thing.

We consider all our copies of observed state to be soft --- they can be lost 
and reconstructed at any time, because the true source is the underlying cloud. 
 Which is not to say that reconstructing a copy is cheap.  We prefer making 
incremental updates as needed, rather than re-reading the whole thing.  One of 
our local extensions adds a mechanism by which a client can register to be 
notified of changes in the Software Defined Compute area.
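
A hypothetical sketch of such a soft, incrementally maintained copy (the event 
format is made up):

    # Hypothetical sketch: a soft (reconstructible) copy of observed state
    # that prefers incremental change notifications over full re-reads.

    class ObservedStateCache(object):
        def __init__(self, read_all_from_cloud):
            self._read_all = read_all_from_cloud   # callable returning a dict
            self._state = {}

        def resync(self):
            # Expensive full reconstruction from the authoritative cloud APIs.
            self._state = self._read_all()

        def on_notification(self, event):
            # Cheap incremental update from a change subscription.
            if event["kind"] == "deleted":
                self._state.pop(event["id"], None)
            else:                                  # "created" or "updated"
                self._state[event["id"]] = event["data"]

        def allocations(self):
            return dict(self._state)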

The target state, on the other hand, is stored authoritatively by the placement 
agent in a database.

We pose placement as a constrained optimization problem, with a non-linear 
objective.  We approximate its solution with a very generic algorithm; it is 
easy to add new kinds of constraints and new contributions to the objective.
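
To illustrate what I mean by pluggable constraints and objective contributions, 
here is a toy stand-in (a naive greedy loop with made-up plug-ins, not our 
actual algorithm):

    # Toy stand-in for a generic placement optimizer: constraints and
    # objective contributions are plug-ins, so adding a new kind requires
    # no change to the solver loop.  The greedy loop itself is purely
    # illustrative, not the actual algorithm.

    def place(resources, hosts, constraints, objective_terms):
        placement = {}
        for res in resources:
            feasible = [h for h in hosts
                        if all(c(res, h, placement) for c in constraints)]
            if not feasible:
                raise RuntimeError("no feasible host for %s" % res)
            placement[res] = min(
                feasible,
                key=lambda h: sum(t(res, h, placement) for t in objective_terms))
        return placement

    # Hypothetical plug-ins: a crude capacity constraint and a spreading term.
    def fits(res, host, placement):
        return sum(1 for h in placement.values() if h == host) < 2

    def spread(res, host, placement):
        return sum(1 for h in placement.values() if h == host)

    print(place(["web", "db", "cache"], ["h1", "h2"], [fits], [spread]))

The point is that the solver loop never changes when a new kind of constraint or 
objective term is added.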

The core placement problem is about packing virtual resources into physical 
containers (e.g., V

Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-15 Thread Mike Spreitzer
> From: Gary Kotton 
> ...
> Can you please join us at the upcoming scheduler meeting? That 
> will give you a chance to bring up your ideas and discuss them with 
> a larger audience.

I will do so on Sep 17.  Later meetings still TBD.

Regards,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-16 Thread Mike Spreitzer
I have written a brief document, with pictures.  See 
https://docs.google.com/document/d/1hQQGHId-z1A5LOipnBXFhsU3VAMQdSe-UXvL4VPY4ps

Regards,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-17 Thread Gary Kotton
Hi,
The document is locked.
Thanks
Gary

From: Mike Spreitzer <mspre...@us.ibm.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Tuesday, September 17, 2013 8:00 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things together for 
Icehouse

I have written a brief document, with pictures.  See 
https://docs.google.com/document/d/1hQQGHId-z1A5LOipnBXFhsU3VAMQdSe-UXvL4VPY4ps

Regards,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-17 Thread Mike Spreitzer
Fixed, sorry.




From:   Gary Kotton
To:     OpenStack Development Mailing List
Date:   09/17/2013 03:26 AM
Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse



Hi,
The document is locked.
Thanks
Gary

From: Mike Spreitzer
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Tuesday, September 17, 2013 8:00 AM
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things together 
for Icehouse

I have written a brief document, with pictures.  See 
https://docs.google.com/document/d/1hQQGHId-z1A5LOipnBXFhsU3VAMQdSe-UXvL4VPY4ps


Regards, 
Mike



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-18 Thread Stein, Manuel (Manuel)
Mike,

interesting document.

What would be your approach to regions/zones/ensembles - does holistic mean to 
schedule w.r.t. I-specific constraints across _all_ hosts?

According to your naming and description, on the one hand I understand that the 
infrastructure orchestrator would not do any of the colocate-like 
constraint evaluation. On the other hand, would the holistic scheduler leave 
some freedom in host selection to the infrastructure scheduler, because it should 
try to align real state with the target state by tracking the observed 
state? Do you split placement across the two (e.g. holistic decides on zone, 
infrastructure decides on host)?

It seems both the holistic scheduler and the infrastructure orchestrator 
use the observed state, but wouldn't they consume different (non-overlapping) 
information? What kind of information is shared, i.e. used by both the 
scheduler and the orchestrator?

As you mention Boris' take on scheduling efficiency, his approach is direct 
notification, circumventing the DB. Effectively, this would also 
affect the synchronized state among scheduler instances. What's your take on 
this? My humble understanding is that your holistic scheduling design (zoom in 
https://docs.google.com/drawings/d/1o2AcxO-qe2o1CE_g60v769hx9VNUvkUSaHBQITRl_8E/edit?pli=1)
resembles a little bit the current nova DB approach, with one central observed 
state (currently kept in the nova DB) and a synchronized/synthesized effective 
state in the scheduler instance (like the compute_node_get_all() call). 
However, why the separation between the holistic scheduler and the infrastructure 
orchestrator? Once the scheduling decision is taken based on the effective 
state, the result could be given to the next level - why the orchestration? 
Multiple regions/service endpoints?

In case the holistic scheduler's "target state" decision that was taken on 
effective-state is a target that can't be achieved by recurring infrastructure 
orchestration: When would you then requeue the I-CFN and re-evaluate the 
holistic scheduler's decision based on an updated effective-state? When do you 
decide the target state can't be met with the decisions of the holistic 
scheduler?

I somehow would expect a "first come first served" policy from a provider. Is 
there some point of serialization of I-CFN deployments through one instance of 
a holistic scheduler or do you plan to have multiple instances of it? When 
parallel holistic schedulers pass decisions to parallel orchestrated 
deployment, the pursuit of a complex application topology/pattern/template's 
target state may be repeatedly interrupted by other decisions/pursuits of 
smaller applications coming in, causing the complex deployment to be delayed. 
Where would you prevent that?

Best, Manuel

PS: though I'm neither a developer, a sub-group member, nor a board member yet, I very much 
welcome the idea of the deployment phases (S-CFN, I-CFN, CFN) and referencing 
levels, as we had exactly that approach in an EU research project (IRMOS), 
applying ontologies and interlinking RDF entities. But we had made heavy use of 
asynchronous chained transactions in order to request/reserve/book resources, 
which is heavy on the transaction-state side and doesn't fit with RESTful 
req/res and eventual consistency. The key observation in your 
suggestion, IMHO, is the call for a somewhat clearer separation of 
Software policy, Infrastructure policy and the actually demanded virtual 
infrastructure.


From: Mike Spreitzer [mailto:mspre...@us.ibm.com]
Sent: Tuesday, 17 September 2013 07:00
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things together for 
Icehouse

I have written a brief document, with pictures.  See 
https://docs.google.com/document/d/1hQQGHId-z1A5LOipnBXFhsU3VAMQdSe-UXvL4VPY4ps

Regards,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-18 Thread Mike Spreitzer
Manuel, and others:
I am sorry, in the rush at the end of the scheduler meeting a critical 
fact flew from my mind: the material I distributed beforehand was intended 
as something I could reference during discussion in the meeting, I did not 
expect it to fully stand on its own.  Indeed, you have noticed that it 
does not.  It will take a little more time to write something that stands 
on its own.  I will try to get something out soon, including answers to 
your questions.

I should also make clear the overall sense of what I am doing.  I am in an 
in-between state.  My group has some running code on which I can report, 
but we are not satisfied with it for a few reasons.  One is that it is not 
integrated yet in any way with Heat, and I think the discussion we are 
having here overlaps with Heat.  Another is that it does not support very 
general changes, we have so far been solving initial deployment issues. We 
have been thinking about how to do better on these issues, and have an 
outline and are proceeding with the work; I can report on these too.  The 
things that concern me the most are issues of how to get architectural 
alignment with what the OpenStack community is doing.  So my main aim 
right now is to have a discussion of how the pieces fit together.  I am 
told that the OpenStack community likes to focus on small incremental 
changes, and that is a way to get things done, but I, at least, would like 
to get some sense of where this is going.

Thanks,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-20 Thread Mike Spreitzer
I have written a new outline of my thoughts; you can find it at 
https://docs.google.com/document/d/1RV_kN2Io4dotxZREGEks9DM0Ih_trFZ-PipVDdzxq_E

It is intended to stand up better to independent study.  However, it is 
still just an outline.  I am still learning about stuff going on in 
OpenStack, and am learning and thinking faster than I can write.  Trying 
to figure out how to cope.

Regards,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-20 Thread Christopher Armstrong
Hi Mike,

I have a *slightly* better idea of the kind of stuff you're talking about,
but I think it would really help if you could include some concrete
real-world use cases and describe why a holistic scheduler inside of Heat
is necessary for solving them.


On Fri, Sep 20, 2013 at 2:13 AM, Mike Spreitzer  wrote:

> I have written a new outline of my thoughts, you can find it at
> https://docs.google.com/document/d/1RV_kN2Io4dotxZREGEks9DM0Ih_trFZ-PipVDdzxq_E
>
> It is intended to stand up better to independent study.  However, it is
> still just an outline.  I am still learning about stuff going on in
> OpenStack, and am learning and thinking faster than I can write.  Trying to
> figure out how to cope.
>
> Regards,
> Mike


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-23 Thread Zane Bitter

On 15/09/13 09:19, Mike Spreitzer wrote:

But first I must admit that I am still a newbie to OpenStack, and still
am missing some important clues.  One thing that mystifies me is this: I
see essentially the same thing, which I have generally taken to calling
holistic scheduling, discussed in two mostly separate contexts: (1) the
(nova) scheduler context, and (2) the ambitions for heat.  What am I
missing?


I think what you're missing is that the only person discussing this in 
the context of Heat is you. Beyond exposing the scheduling parameters in 
other APIs to the user, there's nothing here for Heat to do.


So if you take [heat] out of the subject line then it will be discussed 
in only one context, and you will be mystified no longer. Hope that helps :)


cheers,
Zane.



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-23 Thread Keith Bray

I think this picture is relevant to Heat context:
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U/edit


As more and more types of compute (containers, VMs, bare metal) and other
resources (geographically dispersed) become available from the cloud with
broader capabilities (e.g. regionally dispersed backups,
failover/recovery, etc.), the concept of scheduling and optimizing
resource placement becomes more important, particularly when a customer
wants to deploy an application that has multiple underlying resource needs
but doesn't want to know (or care) about specifying the details of those
resources and their placement.

I'm not advocating that this does or does not belong in Heat (in general
I think Stack resource placement, region, etc., belongs with the template
author or authoring system, and I think physical resource placement
belongs with the underlying service, Nova, Trove, etc.), but I appreciate
Mike including Heat on this. I for one would vote that we consider this
"in-context" for discussion purposes, regardless of action.  Placement
coordination across disparate resource services is likely to become a more
prominent problem, and given Heat has the most holistic view of the
application topology stack within the cloud, Heat may have something to
offer in being a piece of the solution.

Kind regards,
-Keith


On 9/23/13 11:22 AM, "Zane Bitter"  wrote:

>On 15/09/13 09:19, Mike Spreitzer wrote:
>> But first I must admit that I am still a newbie to OpenStack, and still
>> am missing some important clues.  One thing that mystifies me is this: I
>> see essentially the same thing, which I have generally taken to calling
>> holistic scheduling, discussed in two mostly separate contexts: (1) the
>> (nova) scheduler context, and (2) the ambitions for heat.  What am I
>> missing?
>
>I think what you're missing is that the only person discussing this in
>the context of Heat is you. Beyond exposing the scheduling parameters in
>other APIs to the user, there's nothing here for Heat to do.
>
>So if you take [heat] out of the subject line then it will be discussed
>in only one context, and you will be mystified no longer. Hope that helps
>:)
>
>cheers,
>Zane.
>




Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-23 Thread Clint Byrum
Excerpts from Keith Bray's message of 2013-09-23 12:22:16 -0700:
> 
> I think this picture is relevant to Heat context:
> https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U/edit
> 
> 
> As more and more types of compute (containers, VMs, bare metal) and other
> resources (geographically dispersed) become available from the cloud with
> broader capabilities (e.g. regionally dispersed backups,
> failover/recovery, etc.), the concept of scheduling and optimizing
> resource placement becomes more important, particularly when a customer
> wants to deploy an application that has multiple underlying resource needs
> but doesn't want to know (or care) about specifying the details of those
> resources and their placement.
> 
> I'm not advocating that this does or does not belong in Heat (in general
> I think Stack resource placement, region, etc., belongs with the template
> author or authoring system, and I think physical resource placement
> belongs with the underlying service, Nova, Trove, etc.), but I appreciate
> Mike including Heat on this. I for one would vote that we consider this
> "in-context" for discussion purposes, regardless of action.  Placement
> coordination across disparate resource services is likely to become a more
> prominent problem, and given Heat has the most holistic view of the
> application topology stack within the cloud, Heat may have something to
> offer in being a piece of the solution.

Just to restate what you and Zane have just said, succinctly: there is
definitely a need for Heat to be able to communicate to the APIs any
placement details that can be communicated. However, Heat should not
actually be "scheduling" anything.



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-23 Thread Mike Spreitzer
I was not trying to raise issues of geographic dispersion and other higher 
level structures, I think the issues I am trying to raise are relevant 
even without them.  This is not to deny the importance, or relevance, of 
higher levels of structure.  But I would like to first respond to the 
discussion that I think is relevant even without them.

I think it is valuable for OpenStack to have a place for holistic 
infrastructure scheduling.  I am not the only one to argue for this, but I 
will give some use cases.  Consider Hadoop, which stresses the path 
between Compute and Block Storage.  In the usual way of deploying and 
configuring Hadoop, you want each data node to be using directly attached 
storage.  You could address this by scheduling one of those two services 
first, and then the second with constraints from the first --- but the 
decisions made by the first could paint the second into a corner.  It is 
better to be able to schedule both jointly.  Also consider another 
approach to Hadoop, in which the block storage is provided by a bank of 
storage appliances that is equidistant (in networking terms) from all the 
Compute.  In this case the Storage and Compute scheduling decisions have 
no strong interaction --- but the Compute scheduling can interact with the 
network (you do not want to place Compute in a way that overloads part of 
the network).
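
To make the "painted into a corner" point concrete, here is a tiny hypothetical 
sketch (made-up capacities) contrasting sequential with joint placement of a 
data node's VM and its directly attached volume:

    # Hypothetical illustration: each Hadoop data node needs a VM *and* a
    # directly attached volume on the same host.  Scheduling compute first
    # can strand the volume; scheduling the pair jointly cannot.

    hosts = {
        "h1": {"vcpus": 8, "disk_gb": 0},      # plenty of CPU, no local disk
        "h2": {"vcpus": 8, "disk_gb": 500},
    }
    need = {"vcpus": 4, "disk_gb": 200}

    def sequential(hosts, need):
        # The compute scheduler picks a host by CPU alone ...
        vm_host = max(hosts, key=lambda h: hosts[h]["vcpus"])
        # ... and the collocated volume may then be impossible to place.
        if hosts[vm_host]["disk_gb"] < need["disk_gb"]:
            return None
        return vm_host

    def joint(hosts, need):
        # Consider VM and volume together: only hosts satisfying both count.
        for h, cap in hosts.items():
            if cap["vcpus"] >= need["vcpus"] and cap["disk_gb"] >= need["disk_gb"]:
                return h
        return None

    print(sequential(hosts, need))   # can be None: painted into a corner
    print(joint(hosts, need))        # "h2"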

Once a holistic infrastructure scheduler has made its decisions, there is 
then a need for infrastructure orchestration.  The infrastructure 
orchestration function is logically downstream from holistic scheduling. I 
do not favor creating a new and alternate way of doing infrastructure 
orchestration in this position.  Rather I think it makes sense to use 
essentially today's heat engine.

Today Heat is the only thing that takes a holistic view of 
patterns/topologies/templates, and there are various pressures to expand 
the mission of Heat.  A marquee expansion is to take on software 
orchestration.  I think holistic infrastructure scheduling should be 
downstream from the preparatory stage of software orchestration (the other 
stage of software orchestration is the run-time action in and supporting 
the resources themselves).  There are other pressures to expand the 
mission of Heat too.  This leads to conflicting usages for the word 
"heat": it can mean the infrastructure orchestration function that is the 
main job of today's heat engine, or it can mean the full expanded mission 
(whatever you think that should be).  I have been mainly using "heat" in 
that latter sense, but I do not really want to argue over naming of bits 
and assemblies of functionality.  Call them whatever you want.  I am more 
interested in getting a useful arrangement of functionality.  I have 
updated my picture at 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U 
--- do you agree that the arrangement of functionality makes sense?

Thanks,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-23 Thread Mike Spreitzer
Someone earlier asked for greater clarity about infrastructure 
orchestration, so here is my view.  I see two main issues: (1) deciding 
the order in which to do things, and (2) doing them in an acceptable 
order.  That's an oversimplified wording because, in general, some 
parallelism is possible.  In general, the set of things to do is 
constrained by a partial order --- and that partial order comes from two 
sources.  One is the nature of the downstream APIs.  For example, you cannot 
attach a volume or a floating IP address to a VM until after both have 
been created.  The other source of ordering constraints is upstream 
decision makers.  Decisions made upstream are conveyed into today's heat 
engine by data dependencies between resources in a heat template.  The 
heat engine is not making those decisions.  It is not a source of 
important ordering constraints.  When the ordering constraints actually 
allow some parallelism --- they do not specify a total order --- the heat 
engine has freedom in how much of that parallelism to exploit versus flatten into 
sequential ordering.  What today's heat engine does is make its own 
choices about that and issue the operations, keeping track of IDs and 
outcomes.  I have been using the term "infrastructure orchestration" to 
refer to this latter job (issuing infrastructure operations with 
acceptable ordering/parallelism), not the decision-making of upstream 
agents.  This might be confusing; I think the plain English meaning of 
"orchestration" suggests decision-making as well as execution.

Regards,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-23 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2013-09-23 20:31:32 -0700:
> I was not trying to raise issues of geographic dispersion and other higher 
> level structures, I think the issues I am trying to raise are relevant 
> even without them.  This is not to deny the importance, or relevance, of 
> higher levels of structure.  But I would like to first respond to the 
> discussion that I think is relevant even without them.
> 
> I think it is valuable for OpenStack to have a place for holistic 
> infrastructure scheduling.  I am not the only one to argue for this, but I 
> will give some use cases.  Consider Hadoop, which stresses the path 
> between Compute and Block Storage.  In the usual way of deploying and 
> configuring Hadoop, you want each data node to be using directly attached 
> storage.  You could address this by scheduling one of those two services 
> first, and then the second with constraints from the first --- but the 
> decisions made by the first could paint the second into a corner.  It is 
> better to be able to schedule both jointly.  Also consider another 
> approach to Hadoop, in which the block storage is provided by a bank of 
> storage appliances that is equidistant (in networking terms) from all the 
> Compute.  In this case the Storage and Compute scheduling decisions have 
> no strong interaction --- but the Compute scheduling can interact with the 
> network (you do not want to place Compute in a way that overloads part of 
> the network).
> 
> Once a holistic infrastructure scheduler has made its decisions, there is 
> then a need for infrastructure orchestration.  The infrastructure 
> orchestration function is logically downstream from holistic scheduling. I 
> do not favor creating a new and alternate way of doing infrastructure 
> orchestration in this position.  Rather I think it makes sense to use 
> essentially today's heat engine.
> 

Ok, now I think I understand you.

What you're talking about, in many ways, is very similar to what the
autoscale-interested folk have been proposing. Something that sits
outside of Heat and makes use of other information (alarms/policy/etc)
to tweak a Heat stack.

Only this service would make use of information before a stack was
even created.

I like it, and I do think that it should be "part of heat" because it will
be making use of Heat's templating to make those decisions. However,
I also think it should be a separate repository/project within the
"OpenStack Orchestration" program, to keep it honest with regard to
interfaces. Heat's infrastructure-focused service is already big enough,
we don't need to grow it even more with only slightly-related code.

Also I imagine there are many ways to skin this cat, and thus we may see
alternative holistic schedulers for specific applications (Savannah may
use a hadoop specific approach, as you suggested). There is also the
possibility of chaining schedulers.

The Tuskar project also comes to mind, as the deployment of baremetal with
a mind toward network topology and physical placement (racks/rooms/etc)
for the explicit purpose of deploying OpenStack is itself a form of
holistic scheduling.

> Today Heat is the only thing that takes a holistic view of 
> patterns/topologies/templates, and there are various pressures to expand 
> the mission of Heat.  A marquee expansion is to take on software 
> orchestration.  I think holistic infrastructure scheduling should be 
> downstream from the preparatory stage of software orchestration (the other 
> stage of software orchestration is the run-time action in and supporting 
> the resources themselves).  There are other pressures to expand the 
> mission of Heat too.  This leads to conflicting usages for the word 
> "heat": it can mean the infrastructure orchestration function that is the 
> main job of today's heat engine, or it can mean the full expanded mission 
> (whatever you think that should be).  I have been mainly using "heat" in 
> that latter sense, but I do not really want to argue over naming of bits 
> and assemblies of functionality.  Call them whatever you want.  I am more 
> interested in getting a useful arrangement of functionality.  I have 
> updated my picture at 
> https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
>  
> --- do you agree that the arrangement of functionality makes sense?

I do now. I stand by the original position that Heat, as it exists today,
would simply pass along infrastructure scheduling decisions made by a
holistic scheduler. However I think it would be unwise to try and develop
these things apart from one another as it may encourage fracturing the
template language. So I would propose that if there is a more general
purpose attempt at holistic scheduling via Heat templates that it be
done as a separate service/repository within the Heat program.


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-24 Thread Zane Bitter

On 24/09/13 05:31, Mike Spreitzer wrote:

I was not trying to raise issues of geographic dispersion and other
higher level structures, I think the issues I am trying to raise are
relevant even without them.  This is not to deny the importance, or
relevance, of higher levels of structure.  But I would like to first
respond to the discussion that I think is relevant even without them.

I think it is valuable for OpenStack to have a place for holistic
infrastructure scheduling.  I am not the only one to argue for this, but
I will give some use cases.  Consider Hadoop, which stresses the path
between Compute and Block Storage.  In the usual way of deploying and
configuring Hadoop, you want each data node to be using directly
attached storage.  You could address this by scheduling one of those two
services first, and then the second with constraints from the first ---
but the decisions made by the first could paint the second into a
corner.  It is better to be able to schedule both jointly.  Also
consider another approach to Hadoop, in which the block storage is
provided by a bank of storage appliances that is equidistant (in
networking terms) from all the Compute.  In this case the Storage and
Compute scheduling decisions have no strong interaction --- but the
Compute scheduling can interact with the network (you do not want to
place Compute in a way that overloads part of the network).


Thanks for writing this up, it's very helpful for figuring out what you 
mean by a 'holistic' scheduler.


I don't yet see how this could be considered in-scope for the 
Orchestration program, which uses only the public APIs of other services.


To take the first example, wouldn't your holistic scheduler effectively 
have to reserve a compute instance and some directly attached block 
storage prior to actually creating them? Have you considered Climate 
rather than Heat as an integration point?



Once a holistic infrastructure scheduler has made its decisions, there
is then a need for infrastructure orchestration.  The infrastructure
orchestration function is logically downstream from holistic scheduling.


I agree that it's necessarily 'downstream' (in the sense of happening 
afterwards). I'd hesitate to use the word 'logically', since I think by 
its very nature a holistic scheduler introduces dependencies between 
services that were intended to be _logically_ independent.



  I do not favor creating a new and alternate way of doing
infrastructure orchestration in this position.  Rather I think it makes
sense to use essentially today's heat engine.

Today Heat is the only thing that takes a holistic view of
patterns/topologies/templates, and there are various pressures to expand
the mission of Heat.  A marquee expansion is to take on software
orchestration.  I think holistic infrastructure scheduling should be
downstream from the preparatory stage of software orchestration (the
other stage of software orchestration is the run-time action in and
supporting the resources themselves).  There are other pressures to
expand the mission of Heat too.  This leads to conflicting usages for
the word "heat": it can mean the infrastructure orchestration function
that is the main job of today's heat engine, or it can mean the full
expanded mission (whatever you think that should be).  I have been
mainly using "heat" in that latter sense, but I do not really want to
argue over naming of bits and assemblies of functionality.  Call them
whatever you want.  I am more interested in getting a useful arrangement
of functionality.  I have updated my picture at
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
--- do you agree that the arrangement of functionality makes sense?


Candidly, no.

As proposed, the software configs contain directives like 'hosted_on: 
server_name'. (I don't know that I'm a huge fan of this design, but I 
don't think the exact details are relevant in this context.) There's no 
non-trivial processing in the preparatory stage of software 
orchestration that would require it to be performed before scheduling 
could occur.


Let's make sure we distinguish between doing holistic scheduling, which 
requires a priori knowledge of the resources to be created, and 
automatic scheduling, which requires psychic knowledge of the user's 
mind. (Did the user want to optimise for performance or availability? 
How would you infer that from the template?) There's nothing that 
happens while preparing the software configurations that's necessary for 
the former nor sufficient for the latter.


cheers,
Zane.



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-24 Thread Debojyoti Dutta
Joining the party late :)

I think there have been a lot of interesting ideas around holistic
scheduling over the last few summits. However seems there is no clear
agreement on 1) where it should be specified and implemented 2) what
the specifications look like - VRT, policies, templates etc etc etc 3)
how to scale such implementations given the complexity.

In order to tackle the seemingly complex problem, why don't we get
together during the summit (and do homework beforehand) and converge
on the above independently of where we implement it? Maybe we implement
it in a separate scheduling layer which is independent of existing
services, so that it touches fewer things.

Mike: I agree that we should be specifying a VRT, constraints etc. and
matching them via a constrained optimization ... all this is music to
my ears. However, I feel we will just make it harder to get it done if
we don't simplify the problem, without giving up room for where we want
to be. At the end of the day, in order to place resources efficiently
you need a demand vector/matrix etc. and you match it with the available
resource vector/matrix ... so why not have an abstract resource
placement layer that hides details except for the quantities that are
really needed? That way it can be used inside Heat or it can be used
standalone.

To that effect - if you look at the BP
https://blueprints.launchpad.net/nova/+spec/solver-scheduler, and the
associated code (WIP), it's an attempt to do the same within the
current Nova framework. One thing we could do is to have a layer that
abstracts the VRT and constraints so that a simple optimization
framework could then make the decisions instead of hand-crafted
algorithms that are harder to extend (e.g. the entire suite of
scheduler filters that exist today).

debo

On Tue, Sep 24, 2013 at 7:01 AM, Zane Bitter  wrote:
> On 24/09/13 05:31, Mike Spreitzer wrote:
>>
>> I was not trying to raise issues of geographic dispersion and other
>> higher level structures, I think the issues I am trying to raise are
>> relevant even without them.  This is not to deny the importance, or
>> relevance, of higher levels of structure.  But I would like to first
>> respond to the discussion that I think is relevant even without them.
>>
>> I think it is valuable for OpenStack to have a place for holistic
>> infrastructure scheduling.  I am not the only one to argue for this, but
>> I will give some use cases.  Consider Hadoop, which stresses the path
>> between Compute and Block Storage.  In the usual way of deploying and
>> configuring Hadoop, you want each data node to be using directly
>> attached storage.  You could address this by scheduling one of those two
>> services first, and then the second with constraints from the first ---
>> but the decisions made by the first could paint the second into a
>> corner.  It is better to be able to schedule both jointly.  Also
>> consider another approach to Hadoop, in which the block storage is
>> provided by a bank of storage appliances that is equidistant (in
>> networking terms) from all the Compute.  In this case the Storage and
>> Compute scheduling decisions have no strong interaction --- but the
>> Compute scheduling can interact with the network (you do not want to
>> place Compute in a way that overloads part of the network).
>
>
> Thanks for writing this up, it's very helpful for figuring out what you mean
> by a 'holistic' scheduler.
>
> I don't yet see how this could be considered in-scope for the Orchestration
> program, which uses only the public APIs of other services.
>
> To take the first example, wouldn't your holistic scheduler effectively have
> to reserve a compute instance and some directly attached block storage prior
> to actually creating them? Have you considered Climate rather than Heat as
> an integration point?
>
>
>> Once a holistic infrastructure scheduler has made its decisions, there
>> is then a need for infrastructure orchestration.  The infrastructure
>> orchestration function is logically downstream from holistic scheduling.
>
>
> I agree that it's necessarily 'downstream' (in the sense of happening
> afterwards). I'd hesitate to use the word 'logically', since I think by its
> very nature a holistic scheduler introduces dependencies between services
> that were intended to be _logically_ independent.
>
>
>>   I do not favor creating a new and alternate way of doing
>> infrastructure orchestration in this position.  Rather I think it makes
>> sense to use essentially today's heat engine.
>>
>> Today Heat is the only thing that takes a holistic view of
>> patterns/topologies/templates, and there are various pressures to expand
>> the mission of Heat.  A marquee expansion is to take on software
>> orchestration.  I think holistic infrastructure scheduling should be
>> downstream from the preparatory stage of software orchestration (the
>> other stage of software orchestration is the run-time action in and
>> supporting the resources themselves).  The

Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-25 Thread Mike Spreitzer
Debo, Yathi: I have read 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit?pli=1
and most of the referenced materials, and I have a couple of big-picture 
questions.  That document talks about making Nova call out to something 
that makes the sort of smart decisions you and I favor.  As far as I know, 
Nova is still scheduling one thing at a time.  How does that smart 
decision maker get a look at the whole pattern/template/topology as soon 
as it is needed?  I think you intend the smart guy gets it first, before 
Nova starts getting individual VM calls, right?  How does this picture 
grow to the point where the smart guy is making joint decisions about 
compute, storage, and network?  I think the key idea has to be that the 
smart guy gets a look at the whole problem first, and makes its decisions, 
before any individual resources are requested from 
nova/cinder/neutron/etc.  I think your point about "non-disruptive, works 
with the current nova architecture" is about solving the problem of how 
the smart guy's decisions get into nova.  Presumably this problem will 
occur for cinder and so on, too.  Have I got this right?

There is another way, right?  Today Nova accepts an 'availability zone' 
argument whose value can specify a particular host.  I am not sure about 
Cinder, but you can abuse volume types to get this job done.
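
For example, here is a hedged sketch using python-novaclient; the "zone:host" 
form of the availability-zone argument is admin-only, and the client object and 
names below are assumed rather than tested:

    # Hedged sketch: convey an upstream placement decision to Nova via the
    # availability-zone argument's "zone:host" form (admin-only).  The
    # "nova" object is assumed to be an authenticated python-novaclient
    # Client; the zone and host names are made up.
    def boot_on_host(nova, name, image, flavor, host):
        return nova.servers.create(
            name=name, image=image, flavor=flavor,
            availability_zone="nova:%s" % host)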

Thanks,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-27 Thread Debojyoti Dutta
Hi Mike

We understand your point, and, academically, we agree to a large
extent. We are aware of the optimization results for holistic placement
etc (and how any realistic formulation is hard). When we look at the
scheduler today, it does one thing at a time because it was meant to
be pragmatic when it was built.

Where we want to go is to move gradually to a point where some entity
knows 1) what to place where and 2) in what order to place it, holistically.
So we are saying the same thing!

Now how do we go there 'gradually'? A few of us decided that we can
first do scheduling for a group of instances and then gradually extend it
to the kind of things you are mentioning.

My $0.02: A possible next step is to first tackle the problem of
intelligent placement as in 1) and leave 2) for later or for the Heat guys,
since 1) is independent of 2). Since OpenStack is a framework, it
might be useful to define a simple extensible API (least common
denominator), since everyone will have their opinion on how to
define this (and extend it). We can use the scheduler subgroup to do
this.

We would ideally like to define an API that is cross-service
compliant but works with the current Nova, so that we can incrementally
build this out. In the final version, the API needs to specify
compute, network, storage, policies etc. I think templates might be
more heat or heat-like service specific.

Debo


On Wed, Sep 25, 2013 at 12:57 PM, Mike Spreitzer  wrote:
> Debo, Yathi: I have read
> https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit?pli=1
> and most of the referenced materials, and I have a couple of big-picture
> questions.  That document talks about making Nova call out to something that
> makes the sort of smart decisions you and I favor.  As far as I know, Nova
> is still scheduling one thing at a time.  How does that smart decision maker
> get a look at the whole pattern/template/topology as soon as it is needed?
> I think you intend the smart guy gets it first, before Nova starts getting
> individual VM calls, right?  How does this picture grow to the point where
> the smart guy is making joint decisions about compute, storage, and network?
> I think the key idea has to be that the smart guy gets a look at the whole
> problem first, and makes its decisions, before any individual resources are
> requested from nova/cinder/neutron/etc.  I think your point about
> "non-disruptive, works with the current nova architecture" is about solving
> the problem of how the smart guy's decisions get into nova.  Presumably this
> problem will occur for cinder and so on, too.  Have I got this right?
>
> There is another way, right?  Today Nova accepts an 'availability zone'
> argument whose value can specify a particular host.  I am not sure about
> Cinder, but you can abuse volume types to get this job done.
>
> Thanks,
> Mike



-- 
-Debo~



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-24 Thread Mike Spreitzer
Let me elaborate a little on my thoughts about software orchestration, and 
respond to the recent mails from Zane and Debo.  I have expanded my 
picture at 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U 
and added a companion picture at 
https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g 
that shows an alternative.

One of the things I see going on is discussion about better techniques for 
software orchestration than are supported in plain CFN.  Plain CFN allows 
any script you want in userdata, and prescription of certain additional 
setup elsewhere in cfn metadata.  But it is all mixed together and very 
concrete.  I think many contributors would like to see something with more 
abstraction boundaries, not only within one template but also the ability 
to have modular sources.

I work closely with some colleagues who have a particular software 
orchestration technology they call Weaver.  It takes as input for one 
deployment not a single monolithic template but rather a collection of 
modules.  Like higher level constructs in programming languages, these 
have some independence and can be re-used in various combinations and 
ways.  Weaver has a compiler that weaves together the given modules to 
form a monolithic model.  In fact, the input is a modular Ruby program, 
and the Weaver compiler is essentially running that Ruby program; this 
program produces the monolithic model as a side effect.  Ruby is a pretty 
good language in which to embed a domain-specific language, and my 
colleagues have done this.  The modular Weaver input mostly looks 
declarative, but you can use Ruby to reduce the verboseness of, e.g., 
repetitive stuff --- as well as plain old modularity with abstraction.  We 
think the modular Weaver input is much more compact and better for human 
reading and writing than plain old CFN.  This might not be obvious when 
you are doing the "hello world" example, but when you get to realistic 
examples it becomes clear.

The Weaver input discusses infrastructure issues, in the rich way Debo and 
I have been advocating, as well as software.  For this reason I describe 
it as an integrated model (integrating software and infrastructure 
issues).  I hope for HOT to evolve to be similarly expressive to the 
monolithic integrated model produced by the Weaver compiler.

In Weaver, as well as in some of the other software orchestration 
technologies being discussed, there is a need for some preparatory work 
before the infrastructure (e.g., VMs) is created.  This preparatory stage 
begins the implementation of the software orchestration abstractions. Here 
is the translation from something more abstract into flat userdata and 
other cfn metadata.  For Weaver, this stage also involves some 
stack-specific setup in a distinct coordination service.  When the VMs 
finally run their userdata, the Weaver-generated scripts there use that 
pre-configured part of the coordination service to interact properly with 
each other.

I think that, to a first-order approximation, the software orchestration 
preparatory stage commutes with holistic infrastructure scheduling.  They 
address independent issues, and can be done in either order.  That is why 
I have added a companion picture; the two pictures show the two orders.

My claim of commutativity is limited, as I and colleagues have 
demonstrated only one of the two orderings; the other is just a matter of 
recent thought.  There could be gotchas lurking in there.

Between the two orderings, I have a preference for the one I first 
mentioned and have experience with actually running.  It has the virtue of 
keeping related things closer together: the software orchestration 
compiler is next to the software orchestration preparatory stage, and the 
holistic infrastructure scheduling is next to the infrastructure 
orchestration.

In response to Debo's remark about flexibility: I am happy to see an 
architecture that allows either ordering if it turns out that they are 
both viable and the community really wants that flexibility.  I am not so 
sure we can totally give up on architecting where things go, but this 
level of flexibility I can understand and get behind (provided it works).

Just as an LP solver is a general utility whose uses do not require 
architecting, I can imagine a higher level utility that solves abstract 
placement problems.  Actually, this is not a matter of imagination.  My 
group has been evolving such a thing for years.  It is now based, as Debo 
recommends, on a very flexible and general optimization algorithm.  But 
the plumbing between it and the rest of the system is significant; I would 
not expect many users to take on that magnitude of task.

I do not really want to get into dogmatic fights over what gets labelled 
"heat".  I will leave the questions about which piece goes where in the 
OpenStack programs and projects to those more informed and anointed.  What 
I am trying t

Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-24 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2013-09-24 22:03:21 -0700:
> Let me elaborate a little on my thoughts about software orchestration, and 
> respond to the recent mails from Zane and Debo.  I have expanded my 
> picture at 
> https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
>  
> and added a companion picture at 
> https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g
>  
> that shows an alternative.
> 
> One of the things I see going on is discussion about better techniques for 
> software orchestration than are supported in plain CFN.  Plain CFN allows 
> any script you want in userdata, and prescription of certain additional 
> setup elsewhere in cfn metadata.  But it is all mixed together and very 
> concrete.  I think many contributors would like to see something with more 
> abstraction boundaries, not only within one template but also the ability 
> to have modular sources.
> 

Yes please. Orchestrate things, don't configure them. That is what
configuration tools are for.

There is a third stealth-objective that CFN has caused to linger in
Heat. That is "packaging cloud applications". By allowing the 100%
concrete CFN template to stand alone, users can "ship" the template.

IMO this marrying of software assembly, config, and orchestration is a
concern unto itself, and best left outside of the core infrastructure
orchestration system.

> I work closely with some colleagues who have a particular software 
> orchestration technology they call Weaver.  It takes as input for one 
> deployment not a single monolithic template but rather a collection of 
> modules.  Like higher level constructs in programming languages, these 
> have some independence and can be re-used in various combinations and 
> ways.  Weaver has a compiler that weaves together the given modules to 
> form a monolithic model.  In fact, the input is a modular Ruby program, 
> and the Weaver compiler is essentially running that Ruby program; this 
> program produces the monolithic model as a side effect.  Ruby is a pretty 
> good language in which to embed a domain-specific language, and my 
> colleagues have done this.  The modular Weaver input mostly looks 
> declarative, but you can use Ruby to reduce the verboseness of, e.g., 
> repetitive stuff --- as well as plain old modularity with abstraction.  We 
> think the modular Weaver input is much more compact and better for human 
> reading and writing than plain old CFN.  This might not be obvious when 
> you are doing the "hello world" example, but when you get to realistic 
> examples it becomes clear.
> 
> The Weaver input discusses infrastructure issues, in the rich way Debo and 
> I have been advocating, as well as software.  For this reason I describe 
> it as an integrated model (integrating software and infrastructure 
> issues).  I hope for HOT to evolve to be similarly expressive to the 
> monolithic integrated model produced by the Weaver compiler.
> 

Indeed, we're dealing with this very problem in TripleO right now. We need
to be able to compose templates that vary slightly for various reasons.

A ruby DSL is not something I think is ever going to happen in
OpenStack. But python has its advantages for DSL as well. I have been
trying to use clever tricks in yaml for a while, but perhaps we should
just move to a client-side python DSL that pushes the compiled yaml/json
templates into the engine.
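
To sketch what I mean (purely hypothetical, not a proposal for concrete function
names or resource properties):

    # Purely hypothetical sketch of a client-side Python DSL: compose
    # reusable pieces and emit one JSON template for the engine.
    import json

    def server(name, flavor, image):
        return {name: {"Type": "OS::Nova::Server",
                       "Properties": {"flavor": flavor, "image": image}}}

    def volume_for(server_name, size_gb):
        resources = {server_name + "_vol": {"Type": "OS::Cinder::Volume",
                                            "Properties": {"size": size_gb}}}
        resources[server_name + "_attach"] = {
            "Type": "OS::Cinder::VolumeAttachment",
            "Properties": {"instance_uuid": {"Ref": server_name},
                           "volume_id": {"Ref": server_name + "_vol"}}}
        return resources

    def compile_template(*parts):
        resources = {}
        for part in parts:
            resources.update(part)
        return json.dumps({"Resources": resources}, indent=2)

    # Two reusable pieces woven into one concrete template.
    print(compile_template(server("data1", "m1.large", "fedora-19"),
                           volume_for("data1", 200)))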



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Thomas Spatzier
Clint Byrum  wrote on 25.09.2013 08:46:57:
> From: Clint Byrum 
> To: openstack-dev ,
> Date: 25.09.2013 08:48
> Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things
> together for Icehouse (now featuring software orchestration)
>
> Excerpts from Mike Spreitzer's message of 2013-09-24 22:03:21 -0700:
> > Let me elaborate a little on my thoughts about software orchestration, and
> > respond to the recent mails from Zane and Debo.  I have expanded my
> > picture at
> > https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
> > and added a companion picture at
> > https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g
> > that shows an alternative.
> >
> > One of the things I see going on is discussion about better techniques for
> > software orchestration than are supported in plain CFN.  Plain CFN allows
> > any script you want in userdata, and prescription of certain additional
> > setup elsewhere in cfn metadata.  But it is all mixed together and very
> > concrete.  I think many contributors would like to see something with more
> > abstraction boundaries, not only within one template but also the ability
> > to have modular sources.
> >
>
> Yes please. Orchestrate things, don't configure them. That is what
> configuration tools are for.
>
> There is a third stealth-objective that CFN has caused to linger in
> Heat. That is "packaging cloud applications". By allowing the 100%
> concrete CFN template to stand alone, users can "ship" the template.
>
> IMO this marrying of software assembly, config, and orchestration is a
> concern unto itself, and best left outside of the core infrastructure
> orchestration system.
>
> > I work closely with some colleagues who have a particular software
> > orchestration technology they call Weaver.  It takes as input for one
> > deployment not a single monolithic template but rather a collection of
> > modules.  Like higher level constructs in programming languages, these
> > have some independence and can be re-used in various combinations and
> > ways.  Weaver has a compiler that weaves together the given modules to
> > form a monolithic model.  In fact, the input is a modular Ruby program,
> > and the Weaver compiler is essentially running that Ruby program; this
> > program produces the monolithic model as a side effect.  Ruby is a pretty
> > good language in which to embed a domain-specific language, and my
> > colleagues have done this.  The modular Weaver input mostly looks
> > declarative, but you can use Ruby to reduce the verboseness of, e.g.,
> > repetitive stuff --- as well as plain old modularity with abstraction.  We
> > think the modular Weaver input is much more compact and better for human
> > reading and writing than plain old CFN.  This might not be obvious when
> > you are doing the "hello world" example, but when you get to realistic
> > examples it becomes clear.
> >
> > The Weaver input discusses infrastructure issues, in the rich way Debo and
> > I have been advocating, as well as software.  For this reason I describe
> > it as an integrated model (integrating software and infrastructure
> > issues).  I hope for HOT to evolve to be similarly expressive to the
> > monolithic integrated model produced by the Weaver compiler.

I don't fully get this idea of HOT consuming a monolithic model produced by
some compiler - be it Weaver or anything else.
I thought the goal was to develop HOT in a way that users can actually
write HOT, as opposed to having to use some "compiler" to produce some
useful model.
So wouldn't it make sense to make sure we add the right concepts to HOT to
make sure we are able to express what we want to express and have things
like composability, re-use, substitutability?

> >
>
> Indeed, we're dealing with this very problem in TripleO right now. We
need
> to be able to compose templates that vary slightly for various reasons.
>
> A ruby DSL is not something I think is ever going to happen in
> OpenStack. But python has its advantages for DSL as well. I have been
> trying to use clever tricks in yaml for a while, but perhaps we should
> just move to a client-side python DSL that pushes the compiled yaml/json
> templates into the engine.

As said in my comment above, I would like to see us focusing on the
agreement of one language - HOT - instead of yet another DSL.
There are things out there that are well established (like chef or puppet),
and HOT should be able to efficiently and intuitively use those things and
orchestrate components built using those things.

Anyway, this might be off the track that was originally discussed in this
thread (i.e. holistic scheduling and so on) ...

Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Mike Spreitzer
Clint wrote:

> There is a third stealth-objective that CFN has caused to linger in
> Heat. That is "packaging cloud applications". By allowing the 100%
> concrete CFN template to stand alone, users can "ship" the template.
> 
> IMO this marrying of software assembly, config, and orchestration is a
> concern unto itself, and best left outside of the core infrastructure
> orchestration system.

I favor separation of concerns.  I do not follow what you are suggesting 
about how to separate these particular concerns.  Can you elaborate?

Clint also wrote:

> A ruby DSL is not something I think is ever going to happen in
> OpenStack.

Ruby is particularly good when the runtime scripting is done through chef 
or puppet, which are based on Ruby.  For example, Weaver supports chef 
based scripting, and integrates in a convenient way.

A distributed system does not all have to be written in the same language.

Thomas wrote:

> I don't fully get this idea of HOT consuming a monolithic model produced 
by
> some compiler - be it Weaver or anything else.
> I thought the goal was to develop HOT in a way that users can actually
> write HOT, as opposed to having to use some "compiler" to produce some
> useful model.
> So wouldn't it make sense to make sure we add the right concepts to HOT 
to
> make sure we are able to express what we want to express and have things
> like composability, re-use, substitutability?

I am generally suspicious of analogies, but let me offer one here.  In the 
realm of programming languages, many have great features for modularity 
within one source file.  These features are greatly appreciated and used. 
But that does not stop people from wanting to maintain sources factored 
into multiple files.

Back to the world at hand, I do not see a conflict between (1) making a 
language for monoliths with sophisticated internal structure and (2) 
defining one or more languages for non-monolithic sources.
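
As a deliberately tiny illustration of the distinction (this is neither
Weaver nor a proposal for HOT syntax, just a Python sketch of the idea),
modular sources of kind (2) can be woven into a monolith of kind (1) by a
trivial compiler:

    # Each "module" contributes a fragment of the overall model.
    def web_tier(count=2):
        return {"web%d" % i: {"Type": "OS::Nova::Server"} for i in range(count)}

    def db_tier():
        return {"db": {"Type": "OS::Nova::Server"},
                "db_volume": {"Type": "OS::Cinder::Volume"}}

    # The "compiler" merges the fragments into one monolithic model.
    def weave(*modules):
        monolith = {"Resources": {}}
        for module in modules:
            monolith["Resources"].update(module)
        return monolith

    print(weave(web_tier(), db_tier()))

The modules can be re-used and recombined; the monolith is what a language
of kind (1) would have to be able to express.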

Thomas wrote:
> As said in my comment above, I would like to see us focusing on the
> agreement of one language - HOT - instead of yet another DSL.
> There are things out there that are well established (like chef or 
puppet),
> and HOT should be able to efficiently and intuitively use those things 
and
> orchestrate components built using those things.

Yes, it may be that our best tactic at this point is to allow multiple 
languages of kind (2), some or all not defined through the OpenStack 
Foundation, while agreeing here on a single language of kind (1).

Thomas wrote:
> Anyway, this might be off the track that was originally discussed in 
this
> thread (i.e. holistic scheduling and so on) ...

We are engaged in a boundary-drawing and relationship-drawing exercise.  I 
brought up this idea of a software orchestration compiler to show why I 
think the software orchestration preparation stage is best done earlier 
rather than later.

Regards,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2013-09-25 00:59:44 -0700:
> Clint Byrum  wrote on 25.09.2013 08:46:57:
> > From: Clint Byrum 
> > To: openstack-dev ,
> > Date: 25.09.2013 08:48
> > Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things
> > together for Icehouse (now featuring software orchestration)
> >
> > Excerpts from Mike Spreitzer's message of 2013-09-24 22:03:21 -0700:
> > > Let me elaborate a little on my thoughts about software orchestration,
> > > and respond to the recent mails from Zane and Debo.  I have expanded my
> > > picture at
> > > https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
> > > and added a companion picture at
> > > https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g
> > > that shows an alternative.
> > >
> > > One of the things I see going on is discussion about better techniques
> > > for software orchestration than are supported in plain CFN.  Plain CFN
> > > allows any script you want in userdata, and prescription of certain
> > > additional setup elsewhere in cfn metadata.  But it is all mixed together
> > > and very concrete.  I think many contributors would like to see something
> > > with more abstraction boundaries, not only within one template but also
> > > the ability to have modular sources.
> > >
> >
> > Yes please. Orchestrate things, don't configure them. That is what
> > configuration tools are for.
> >
> > There is a third stealth-objective that CFN has caused to linger in
> > Heat. That is "packaging cloud applications". By allowing the 100%
> > concrete CFN template to stand alone, users can "ship" the template.
> >
> > IMO this marrying of software assembly, config, and orchestration is a
> > concern unto itself, and best left outside of the core infrastructure
> > orchestration system.
> >
> > > I work closely with some colleagues who have a particular software
> > > orchestration technology they call Weaver.  It takes as input for one
> > > deployment not a single monolithic template but rather a collection of
> > > modules.  Like higher level constructs in programming languages, these
> > > have some independence and can be re-used in various combinations and
> > > ways.  Weaver has a compiler that weaves together the given modules to
> > > form a monolithic model.  In fact, the input is a modular Ruby program,
> > > and the Weaver compiler is essentially running that Ruby program; this
> > > program produces the monolithic model as a side effect.  Ruby is a pretty
> > > good language in which to embed a domain-specific language, and my
> > > colleagues have done this.  The modular Weaver input mostly looks
> > > declarative, but you can use Ruby to reduce the verboseness of, e.g.,
> > > repetitive stuff --- as well as plain old modularity with abstraction.  We
> > > think the modular Weaver input is much more compact and better for human
> > > reading and writing than plain old CFN.  This might not be obvious when
> > > you are doing the "hello world" example, but when you get to realistic
> > > examples it becomes clear.
> > >
> > > The Weaver input discusses infrastructure issues, in the rich way Debo
> > > and I have been advocating, as well as software.  For this reason I
> > > describe it as an integrated model (integrating software and infrastructure
> > > issues).  I hope for HOT to evolve to be similarly expressive to the
> > > monolithic integrated model produced by the Weaver compiler.
> 
> I don't fully get this idea of HOT consuming a monolithic model produced by
> some compiler - be it Weaver or anything else.
> I thought the goal was to develop HOT in a way that users can actually
> write HOT, as opposed to having to use some "compiler" to produce some
> useful model.
> So wouldn't it make sense to make sure we add the right concepts to HOT to
> make sure we are able to express what we want to express and have things
> like composability, re-use, substitutability?
> 

We saw this in the history of puppet in fact, where the DSL was always the
problem when trying to make less-than-obvious components, and eventually
puppet had to grow a full "ruby dsl" to avoid those mistakes and keep up
with Chef's language-first approach.

> > >
> >
> > Indeed, we're dealing with this very problem in TripleO right now. We need
> > to be able to compose templates that vary slightly for various reasons.

Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Thomas Spatzier
Excerpt from Clint's mail on 25.09.2013 22:23:07:

>
> I think we already have some summit suggestions for discussing HOT,
> it would be good to come prepared with some visions for the future
> of HOT so that we can hash these things out, so I'd like to see this
> discussion continue.

Absolutely! Can those involved in the discussion check if this seems to be
covered in one of the session proposals that I or others posted recently, and if
not, raise another proposal? This is a good one to have.





Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-26 Thread Steven Hardy
On Wed, Sep 25, 2013 at 11:04:54PM +0200, Thomas Spatzier wrote:
> Excerpt from Clint's mail on 25.09.2013 22:23:07:
> 
> >
> > I think we already have some summit suggestions for discussing HOT,
> > it would be good to come prepared with some visions for the future
> > of HOT so that we can hash these things out, so I'd like to see this
> > discussion continue.
> 
> Absolutely! Can those involved in the discussion check if this seems to be
> covered in one of the session proposals that I or others posted recently, and if
> not, raise another proposal? This is a good one to have.

There is already a general "HOT Discussion" proposal:

http://summit.openstack.org/cfp/details/78

I'd encourage everyone with HOT functionality they'd like to discuss to
raise a blueprint, with a linked wiki page (or etherpad), then link the BP
as a comment to that session proposal.

That way we can hopefully focus the session when discussing the HOT roadmap
and plans for Icehouse.

As in Portland, I expect we'll need breakout sessions in addition to this,
but we can organize that with those interested during the summit.

Steve



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-27 Thread Zane Bitter

On 25/09/13 07:03, Mike Spreitzer wrote:


Zane wrote:
 > To take the first example, wouldn't your holistic scheduler
effectively have
 > to reserve a compute instance and some directly attached block
storage prior
 > to actually creating them? Have you considered Climate rather than
Heat as
 > an integration point?

I had not considered Climate.  Based on recent ML traffic, I see that
Climate is about scheduling into the future, whereas I am only trying to
talk about scheduling for the present.  OTOH, perhaps you are concerned
about concurrency issues.  I am too.  Doing a better job on that is a
big part of the revision my group is working on now.  I think it can be
done.  I plan to post a pointer to some details soon.


Your diagrams clearly show scheduling happening in a separate stage to 
(infrastructure) orchestration, which is to say that at the point where 
resources are scheduled, their actual creation is in the *future*.


I am not a Climate expert, but it seems to me that they have a 
near-identical problem to solve: how do they integrate with Heat such 
that somebody who has reserved resources in the past can actually create 
them (a) as part of a Heat stack or (b) as standalone resources, at the 
user's option. IMO OpenStack should solve this problem only once.



Perhaps the concern is about competition between two managers trying to
manage the same resources.  I think that is (a) something that can not
be completely avoided and (b) impossible to do well.  My preference is
to focus on one manager, and make sure it tolerates surprises in a way
that is not terrible.  Even without competing managers, bugs and other
unexpected failures will cause nasty surprises.

Zane later wrote:
 > As proposed, the software configs contain directives like 'hosted_on:
 > server_name'. (I don't know that I'm a huge fan of this design, but I
don't
 > think the exact details are relevant in this context.) There's no
 > non-trivial processing in the preparatory stage of software orchestration
 > that would require it to be performed before scheduling could occur.

I hope I have addressed that with my remarks above about software
orchestration.


If I understood your remarks correctly, we agree that there is no 
(known) reason that the scheduling has to occur in the middle of 
orchestration (which would have implied that it needed to be 
incorporated in some sense into Heat).



Zane also wrote:
 > Let's make sure we distinguish between doing holistic scheduling, which
 > requires a priori knowledge of the resources to be created, and automatic
 > scheduling, which requires psychic knowledge of the user's mind. (Did the
 > user want to optimise for performance or availability? How would you
infer
 > that from the template?)

One reason I favor holistic infrastructure scheduling is that I want its
input to be richer than today's CFN templates.  Like Debo, I think the
input can contain the kind of information that would otherwise require
mind-reading.  My group has been working examples involving multiple
levels of anti-co-location statements, network reachability and
proximity statements, disk exclusivity statements, and statements about
the presence of licensed products.


Right, so what I'm saying is that if all those things are _stated_ in 
the input then there's no need to run the orchestration engine to find 
out what they'll be; they're already stated.
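
Purely to illustrate what "stated" means here (none of this is an existing
OpenStack API, and all the names are made up), the input could carry such
constraints explicitly, and a holistic placer could check candidate
placements against them directly, without ever running the orchestration
engine:

    # Hypothetical input: VMs plus explicitly stated placement constraints.
    vrt = {
        "vms": ["app1", "app2"],
        "constraints": [
            {"type": "anti-collocation", "members": ["app1", "app2"]},
            {"type": "needs-license", "vm": "app1", "product": "db2"},
        ],
    }

    # Hypothetical view of the hosts' relevant properties.
    hosts = {
        "host-a": {"licenses": {"db2"}},
        "host-b": {"licenses": set()},
    }

    def satisfies(placement, constraint):
        if constraint["type"] == "anti-collocation":
            placed = [placement[m] for m in constraint["members"]]
            return len(set(placed)) == len(placed)  # all on distinct hosts
        if constraint["type"] == "needs-license":
            host = placement[constraint["vm"]]
            return constraint["product"] in hosts[host]["licenses"]
        return True

    placement = {"app1": "host-a", "app2": "host-b"}
    print(all(satisfies(placement, c) for c in vrt["constraints"]))  # True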


cheers,
Zane.



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-27 Thread Mike Spreitzer
Zane Bitter  wrote on 09/27/2013 08:24:49 AM:

> Your diagrams clearly show scheduling happening in a separate stage to 
> (infrastructure) orchestration, which is to say that at the point where 
> resources are scheduled, their actual creation is in the *future*.
> 
> I am not a Climate expert, but it seems to me that they have a 
> near-identical problem to solve: how do they integrate with Heat such 
> that somebody who has reserved resources in the past can actually create 

> them (a) as part of a Heat stack or (b) as standalone resources, at the 
> user's option. IMO OpenStack should solve this problem only once.

If I understand correctly, what Climate adds to the party is planning 
allocations to happen at some specific time in the non-immediate future. A 
holistic infrastructure scheduler is planning allocations to happen just 
as soon as we can get the plans through the relevant code path, which is 
why I describe it as "now".


> If I understood your remarks correctly, we agree that there is no 
> (known) reason that the scheduling has to occur in the middle of 
> orchestration (which would have implied that it needed to be 
> incorporated in some sense into Heat).

If you agree that by orchestration you meant specifically infrastructure 
orchestration then we are agreed.  If software orchestration is also in 
the picture then I also agree that holistic infrastructure scheduling does 
not *have to* go in between software orchestration and infrastructure 
orchestration --- but I think that's a pretty good place for it.


> Right, so what I'm saying is that if all those things are _stated_ in 
> the input then there's no need to run the orchestration engine to find 
> out what they'll be; they're already stated.

Yep.

Thanks,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-27 Thread Mike Spreitzer
Sorry, I was a bit too hasty in writing the last part of my last message; 
I forgot to qualify "software orchestration" to indicate I am speaking 
only of its preparatory phase.  I should have written:

Zane Bitter  wrote on 09/27/2013 08:24:49 AM:

...
> If I understood your remarks correctly, we agree that there is no 
> (known) reason that the scheduling has to occur in the middle of 
> orchestration (which would have implied that it needed to be 
> incorporated in some sense into Heat). 

If you agree that by orchestration you meant specifically infrastructure 
orchestration then we are agreed.  If software orchestration preparation 
is also in the picture then I also agree that holistic infrastructure 
scheduling does not *have to* go in between software orchestration 
preparation and infrastructure orchestration --- but I think that's a 
pretty good place for it.

Regards,
Mike


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-30 Thread Georgy Okrokvertskhov
Hi,

I am working on the OpenStack project Murano which actually had to solve
the same problem with software level orchestration. Right now Murano has a
DSL language which allows you to define a workflow for a complex service
deployment.
Murano uses Heat for infrastructure management, and there is actually a
part of the DSL which allows you to generate a Heat template for
deployment.

This is a project native to OpenStack, written in Python and following all
OpenStack community rules. Before creating Murano we evaluated different
software orchestrators like SaltStack, Chef and Puppet+mcollective. All of
them have capabilities for software management, but none of them are native
to OpenStack. I think it would be quite reasonable to have something under
the full control of the OpenStack community rather than use something which
is not native (even in its programming language) to OpenStack.

Here is a link to the project overview:
https://wiki.openstack.org/wiki/Murano/ProjectOverview

Right now Murano is concentrated on Windows services management, but we are
also working on a Linux agent to allow Linux software configuration too.

When do you have a meeting for HOT software configuration discussion? I
think we can add value here for Heat as we already have the required components
for software orchestration with full integration with OpenStack and
Keystone in particular.

Thanks
Georgy




On Fri, Sep 27, 2013 at 7:15 AM, Mike Spreitzer  wrote:

> Sorry, I was a bit too hasty in writing the last part of my last message;
> I forgot to qualify "software orchestration" to indicate I am speaking only
> of its preparatory phase.  I should have written:
>
> Zane Bitter  wrote on 09/27/2013 08:24:49 AM:
>
> ...
>
> > If I understood your remarks correctly, we agree that there is no
> > (known) reason that the scheduling has to occur in the middle of
> > orchestration (which would have implied that it needed to be
> > incorporated in some sense into Heat).
>
>
> If you agree that by orchestration you meant specifically infrastructure
> orchestration then we are agreed.  If software orchestration preparation is
> also in the picture then I also agree that holistic infrastructure
> scheduling does not *have to* go in between software orchestration
> preparation and infrastructure orchestration --- but I think that's a
> pretty good place for it.
>
> Regards,
> Mike


-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-30 Thread Clint Byrum
Excerpts from Georgy Okrokvertskhov's message of 2013-09-30 11:44:26 -0700:
> Hi,
> 
> I am working on the OpenStack project Murano which actually had to solve
> the same problem with software level orchestration. Right now Murano has a
> DSL language which allows you to define a workflow for a complex service
> deployment.
> Murano uses Heat for infrastructure management, and there is actually a
> part of the DSL which allows you to generate a Heat template for
> deployment.
> 
> This is a project native to OpenStack, written in Python and following all
> OpenStack community rules. Before creating Murano we evaluated different
> software orchestrators like SaltStack, Chef and Puppet+mcollective. All of
> them have capabilities for software management, but none of them are native
> to OpenStack. I think it would be quite reasonable to have something under
> the full control of the OpenStack community rather than use something which
> is not native (even in its programming language) to OpenStack.
> 
> Here is a link to the project overview:
> https://wiki.openstack.org/wiki/Murano/ProjectOverview
> 
> Right now Murano is concentrated on Windows services management, but we are
> also working on a Linux agent to allow Linux software configuration too.
> 

Hi!

We've written some very basic tools to do server configuration for the
OpenStack on OpenStack (TripleO) Deployment program. Hopefully we can
avert you having to do any duplicate work and join forces.

Note that configuring software and servers is not one job. The tools we
have right now:

os-collect-config - agent to collect data from config sources and trigger
commands on changes. [1]

os-refresh-config - run scripts to manage state during config changes
(run-parts but more structured) [2]

os-apply-config - write config files [3]

[1] http://pypi.python.org/pypi/os-collect-config
[2] http://pypi.python.org/pypi/os-refresh-config
[3] http://pypi.python.org/pypi/os-apply-config
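
Very roughly, the pattern those three tools split between them looks like
this; this is a deliberately simplified sketch of the idea only, not how the
tools are actually implemented, and the paths and keys here are made up:

    import json

    def collect(metadata_path):
        # os-collect-config's role: gather metadata from the config sources.
        with open(metadata_path) as f:
            return json.load(f)

    def apply_config(metadata, target_path):
        # os-apply-config's role: turn metadata into concrete config files.
        with open(target_path, "w") as f:
            f.write("listen_port = %s\n" % metadata.get("port", 80))

    def refresh(metadata):
        # os-refresh-config's role: run ordered hooks when the config changes.
        apply_config(metadata, "/tmp/example.conf")
        # ... then restart or signal the affected services here.

    # refresh(collect("/tmp/metadata.json"))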

We do not have a tool to do run-time software installation, because we
are working on an image based deployment method (thus building images
with diskimage-builder).  IMO, there are so many good tools already
written that get this job done, doing one just for the sake of it being
OpenStack native is a low priority.

However, a minimal thing is needed for Heat users so they can use it to
install those better tools for ongoing run-time configuration. cfn-init
is actually pretty good. Its only crime other than being born of Amazon
is that it also does a few other jobs, namely file writing and service
management.

Anyway, before you run off and write an agent, I hope you will take a look
at os-collect-config and consider using it. For the command to run, I
recommend os-refresh-config as you can have it run a progression of config
tools. For what to run in the configuration step of os-refresh-config,
cfn-init would work, however there is a blueprint for a native interface
that might be a bit different here:

https://blueprints.launchpad.net/heat/+spec/native-tools-bootstrap-config

> When do you have a meeting for HOT software configuration discussion? I
> think we can add value here for Heat as we already have the required components
> for software orchestration with full integration with OpenStack and
> Keystone in particular.

Heat meets at 2000 UTC every Wednesday.

TripleO meets at 2000 UTC every Tuesday.

Hope to see you there!



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-10-01 Thread Robert Collins
On 1 October 2013 19:31, Clint Byrum  wrote:
> TripleO meets at 2000 UTC every Tuesday.

1900UTC.

:)

-Rob
-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-10-01 Thread Sylvain Bauza

Hi Mike and Zane,

On 27/09/2013 15:58, Mike Spreitzer wrote:

Zane Bitter  wrote on 09/27/2013 08:24:49 AM:

> Your diagrams clearly show scheduling happening in a separate stage to
> (infrastructure) orchestration, which is to say that at the point where
> resources are scheduled, their actual creation is in the *future*.
>
> I am not a Climate expert, but it seems to me that they have a
> near-identical problem to solve: how do they integrate with Heat such
> that somebody who has reserved resources in the past can actually 
create

> them (a) as part of a Heat stack or (b) as standalone resources, at the
> user's option. IMO OpenStack should solve this problem only once.

If I understand correctly, what Climate adds to the party is planning 
allocations to happen at some specific time in the non-immediate 
future.  A holistic infrastructure scheduler is planning allocations 
to happen just as soon as we can get the plans through the relevant 
code path, which is why I describe it as "now".




Climate is wide-scoped, aiming to exclusively reserve any kind of 
resource by a certain time. This generic sentence doesn't mean Climate 
can't schedule things 'now': you can ask for an immediate lease 
(starting 'now') and you will get the resources as of now.


The Climate team is actually split into two different teams, one focusing 
on hardware procurement and one focusing on virtual procurement. I can't 
speak on behalf of the 'Climate Virtual' team, but I would bet that 
scheduling a Heat stack or a Savanna cluster will require some kind of 
holistic DSL, indeed.


From the 'Climate Physical' POV, that could even be necessary, 
but it is yet unclear at the moment.

-Sylvain



> If I understood your remarks correctly, we agree that there is no
> (known) reason that the scheduling has to occur in the middle of
> orchestration (which would have implied that it needed to be
> incorporated in some sense into Heat).

If you agree that by orchestration you meant specifically 
infrastructure orchestration then we are agreed.  If software 
orchestration is also in the picture then I also agree that holistic 
infrastructure scheduling does not *have to* go in between software 
orchestration and infrastructure orchestration --- but I think that's 
a pretty good place for it.



> Right, so what I'm saying is that if all those things are _stated_ in
> the input then there's no need to run the orchestration engine to find
> out what they'll be; they're already stated.

Yep.

Thanks,
Mike






Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-10-01 Thread Thomas Spatzier
Clint Byrum  wrote on 01.10.2013 08:31:44 - Excerpt:

> From: Clint Byrum 
> To: openstack-dev ,
> Date: 01.10.2013 08:33
> Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things
> together for Icehouse (now featuring software orchestration)
>
> Excerpts from Georgy Okrokvertskhov's message of 2013-09-30 11:44:26 -0700:
> > Hi,
> >
> > I am working on the OpenStack project Murano which actually had to solve
> > the same problem with software level orchestration. Right now Murano has a
> > DSL language which allows you to define a workflow for a complex service
> > deployment.
> > ...
> >
>
> Hi!
>
> We've written some very basic tools to do server configuration for the
> OpenStack on OpenStack (TripleO) Deployment program. Hopefully we can
> avert you having to do any duplicate work and join forces.
>
> Note that configuring software and servers is not one job. The tools we
> have right now:
>
> os-collect-config - agent to collect data from config sources and trigger
> commands on changes. [1]
>
> os-refresh-config - run scripts to manage state during config changes
> (run-parts but more structured) [2]
>
> os-apply-config - write config files [3]
>
> [1] http://pypi.python.org/pypi/os-collect-config
> [2] http://pypi.python.org/pypi/os-refresh-config
> [3] http://pypi.python.org/pypi/os-apply-config
>
> We do not have a tool to do run-time software installation, because we
> are working on an image based deployment method (thus building images
> with diskimage-builder).  IMO, there are so many good tools already
> written that get this job done, doing one just for the sake of it being
> OpenStack native is a low priority.
>
> However, a minimal thing is needed for Heat users so they can use it to
> install those better tools for ongoing run-time configuration. cfn-init
> is actually pretty good. Its only crime other than being born of Amazon
> is that it also does a few other jobs, namely file writing and service
> management.

Right, there has been some discussion going on to find the right level of
software orchestration to go into Heat. As Clint said, there are a couple
of things out there already, like what the TripleO project has been doing.
And there are proposals / discussions going on to see how users could
include some level of software orchestration into HOT, e.g.

https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider

and how such constructs in HOT would align with assets already out there.
So Georgy's item is another one in that direction and it would be good to
find some common denominator.

>
> Anyway, before you run off and write an agent, I hope you will take a look
> at os-collect-config and consider using it. For the command to run, I
> recommend os-refresh-config as you can have it run a progression of config
> tools. For what to run in the configuration step of os-refresh-config,
> cfn-init would work, however there is a blueprint for a native interface
> that might be a bit different here:
>
> https://blueprints.launchpad.net/heat/+spec/native-tools-bootstrap-config
>
> > When do you have a meeting for HOT software configuration discussion? I
> > think we can add value here for Heat as we already have the required
> > components for software orchestration with full integration with OpenStack
> > and Keystone in particular.
>
> Heat meets at 2000 UTC every Wednesday.
>
> TripleO meets at 2000 UTC every Tuesday.
>
> Hope to see you there!

In addition, it looks like there will be some design sessions on that topic
at the HK summit, so if you happen to be there that could be another good
chance to talk.





Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-10-02 Thread Mike Spreitzer
FYI, I have refined my pictures at 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U 
and 
https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g 
to hopefully make it clearer that I agree with the sentiment that holistic 
infrastructure scheduling should not be part of Heat but is closely 
related, and to make a graphical illustration of why I prefer the ordering 
of functionality that I do (the boundary between software and 
infrastructure issues gets less squiggly).

Regards,
Mike