Re: [openstack-dev] [heat-translator] Nominating new core reviewers

2015-08-17 Thread Thomas Spatzier
+1 on both from my side.

Regards,
Thomas

 From: Sahdev P Zala spz...@us.ibm.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
 Date: 17/08/2015 21:38
 Subject: [openstack-dev]  [heat-translator] Nominating new core reviewers

 Hello,

 I am glad to nominate Vahid Hashemian [1] and Srinivas Tadepalli [2]
 for the Heat-Translator core reviewers team.
 Both of them have been providing significant contributions, both
 development and reviews, since the beginning of this year, and they
 know the code base well.
 Existing cores, please reply to this email by the end of this week
 with your +1/-1 vote on their addition to the team.
 Review stats:
http://stackalytics.com/report/contribution/heat-translator/90
 [1] https://review.openstack.org/#/q/reviewer:%22Vahid+Hashemian+%253Cvahidhashemian%2540us.ibm.com%253E%22,n,z
 [2] https://review.openstack.org/#/q/reviewer:%22srinivas_tadepalli+%253Csrinivas.tadepalli%2540tcs.com%253E%22,n,z
 Regards,
 Sahdev Zala
 PTL, Heat-Translator

__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-31 Thread Thomas Spatzier
+1 on both from me!

Cheers,
Thomas

 From: Steve Baker sba...@redhat.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
 Date: 31/07/2015 06:37
 Subject: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn
 for heat-core

 I believe the heat project would benefit from Kanagaraj Manickam and
 Ethan Lynn having the ability to approve heat changes.

 Their reviews are valuable[1][2] and numerous[3], and both have been
 submitting useful commits in a variety of areas in the heat tree.

 Heat cores, please express your approval with a +1 / -1.

 [1] http://stackalytics.com/?user_id=kanagaraj-manickam&metric=marks
 [2] http://stackalytics.com/?user_id=ethanlynn&metric=marks
 [3] http://stackalytics.com/report/contribution/heat-group/90




Re: [openstack-dev] [heat][tripleo] Recursive validation for easier composability

2015-06-22 Thread Thomas Spatzier
 From: Jay Dobies jason.dob...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 22/06/2015 19:22
 Subject: Re: [openstack-dev] [heat][tripleo]Recursive validation for
 easier composability



 On 06/22/2015 12:19 PM, Steven Hardy wrote:
  Hi all,
 
  Lately I've been giving some thought to how we might enable easier
  composability, and in particular how we can make it easier for folks to
  plug in deeply nested optional extra logic, then pass data in via
  parameter_defaults to that nested template.
 
  Here's an example of the use-case I'm describing:
 
  https://review.openstack.org/#/c/193143/5/environments/cinder-netapp-config.yaml
 
  Here, we want to allow someone to easily turn on an optional
configuration
  or feature, in this case a netapp backend for cinder.
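An environment of this kind typically combines a resource_registry mapping with parameter_defaults, roughly like the following sketch (paraphrased, not copied from the actual review; the type name, path, and parameter names are illustrative):

```yaml
# Sketch of an optional-feature environment: map a hook resource type
# to a nested template, then feed that template its implementation-
# specific data via parameter_defaults.
resource_registry:
  OS::TripleO::ControllerExtraConfigPre: puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml

parameter_defaults:
  # These values reach the nested template directly, without being
  # plumbed through every parent template's parameters section.
  CinderEnableNetappBackend: true
  CinderNetappHostname: netapp.example.com
```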

 I think the actual desired goal is bigger than just optional
 configuration. I think it revolves more around choosing a nested stack
 implementation for a resource type and how to manage custom parameters
 for that implementation. We're getting into the territory here of having
 a parent stack defining an API that nested stacks can plug into. I'd
 like to have some sort of way of deriving that information instead of
 having it be completely relegated to outside documentation (but I'm
 getting off topic; at the end I mention how I want to do a better write
 up of the issues Tuskar has faced and I'll elaborate more there).

FWIW, adding a thought from my TOSCA background, where we have been looking
at something similar, namely selecting a nested template that declares
itself to match an interface consumed in a parent template (that's how I
understood Jay's words above). In TOSCA, there is a more type-safe kind of
template nesting, where nested templates do not just bring new resource
types into existence depending on what parameters they expose; rather,
there is a strict contract on the interface a nested template must fulfil -
see [1], and especially look for substitution_mappings.

Admittedly, this is not exactly the same as Steven's original problem, but
it is related. And IIRC, some time back there was some discussion around
introducing some kind of interface for HOT templates. So I wanted to bring
this in and give it some thought as to whether something like this would
make sense for Heat.

[1]
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csd03/TOSCA-Simple-Profile-YAML-v1.0-csd03.html#_Toc419746122
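For readers who do not want to dig through the spec, a substitution mapping looks roughly like this simplified sketch (based on the TOSCA Simple Profile in YAML drafts; the node type names are illustrative, and the exact keyname may differ between draft revisions):

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  # Declares that this whole template can be substituted wherever a
  # node of type example.DatabaseTier appears in a parent template,
  # and maps that type's capabilities onto internal nodes.
  substitution_mappings:
    node_type: example.DatabaseTier
    capabilities:
      database_endpoint: [ database, database_endpoint ]

  node_templates:
    database:
      type: tosca.nodes.Database
```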

Regards,
Thomas


  The parameters specific to this feature/configuration only exist in the
  nested cinder-netapp-config.yaml template, then parameter_defaults are
used
  to wire in the implementation specific data without having to pass the
  values through every parent template (potentially multiple layers of
  nesting).
 
  This approach is working out OK, but we're missing an interface which
makes
  the schema for parameters over the whole tree available.
  
  This is obviously
  a problem, particularly for UI's, where you really need a clearly
defined
  interface for what data is required, what type it is, and what valid
values
  may be chosen.

 I think this is going to be an awesome addition to Heat. As you alluded
 to, we've struggled with this in TripleO. The parameter_defaults works
 to circumvent the parameter passing, but it's rough from a user
 experience point of view since getting the unified list of what's
 configurable is difficult.

  I'm considering an optional additional flag to our template-validate
API
  which allows recursive validation of a tree of templates, with the data
  returned on success to include a tree of parameters, e.g:
 
   heat template-validate -f parent.yaml -e env.yaml --show-nested
   {
     "Description": "The Parent",
     "Parameters": {
       "ParentConfig": {
         "Default": [],
         "Type": "Json",
         "NoEcho": false,
         "Description": "",
         "Label": "ExtraConfig"
       },
       "ControllerFlavor": {
         "Type": "String",
         "NoEcho": false,
         "Description": "",
         "Label": "ControllerFlavor"
       }
     },
     "NestedParameters": {
       "child.yaml": {
         "Parameters": {
           "ChildConfig": {
             "Default": [],
             "Type": "Json",
             "NoEcho": false,
             "Description": "",
             "Label": "Child ExtraConfig"
           }
         }
       }
     }
   }

 Are you intending to resolve parameters passed into a nested stack
 from the parent against what's defined in the nested stack's parameter
 list? I'd want NestedParameters to only list things that aren't already
 being specified by the parent.

 Specifically with regard to the TripleO Heat templates, there is still a
 lot of logic that needs to be applied to properly divide out parameters.
 For example, there are some things passed in from the parents to the
 nested stacks that are kind of namespaced by convention, but it's not a
 hard convention. So to try to group the parameters by service, we'd have
 to look at a particular NestedParameters section and then also add in
 

Re: [Openstack] Fn::FindInMap gives error in Heat HOT Template

2015-02-11 Thread Thomas Spatzier
Hi Khayam,

the Fn::FindInMap is not supported in HOT as far as I can see in the code.

The list of functions supported in the HOT version you are using is defined
in this part of the code:
https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L274

The list for version 2013-05-23 can be found in the same file.
I think this should also be covered in docs that get generated from the
sources, but I do not have a link at hand right now.
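As an aside (not something from the original reply): a json-typed parameter combined with get_param's path lookup can play the role of Fn::FindInMap in HOT. A rough sketch, reusing the Mirror/mirror_map names from the template quoted below; the server properties are abbreviated, and the get_param path syntax should be verified against the HOT spec for the version in use:

```yaml
heat_template_version: 2014-10-16

parameters:
  Mirror:
    type: string
    constraints:
      - allowed_values: [port1, fm00]
  mirror_map:
    # json parameter standing in for an AWS-style Mappings section
    type: json
    default: {fm00: 0, port1: 1}

resources:
  auto_scale_server:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 0
      # get_param with a path looks up mirror_map[Mirror]
      max_size: {get_param: [mirror_map, {get_param: Mirror}]}
      resource:
        type: OS::Nova::Server
        properties:
          image: UbuntuDemo
          flavor: g1.disk
```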

Regards,
Thomas

 From: Khayam Gondal khayam.gon...@gmail.com
 To: openstack@lists.openstack.org openstack@lists.openstack.org
 Date: 11/02/2015 07:03
 Subject: [Openstack] Fn::FindInMap gives error in Heat HOT Template

 In my HOT template heat_template_version: 2014-10-16 I have a
 autoscaling group as
  auto_scale_server:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 0
      max_size: { Fn::FindInMap: [ mirror_map, { Ref: Mirror } ] }
      resource:
        type: OS::Nova::Server
        properties:
          name: Scaled_Blade
          image: UbuntuDemo
          flavor: g1.disk
          key_name: htor
          networks: [{network: internal}]
 where the value of max_size depends upon the property Mirror
 In Parameters section
  parameters:
    Mirror:
      type: string
      label: Mirroring Port
      description: Select the port on which you want to mirror the traffic
      constraints:
        - allowed_values:
            - port1
            - fm00

  Mappings:
    mirror_map:
      fm00: 0
      port1: 1
 When I launch this template, I get the error:
 ERROR: Invalid key 'mirror_map' for parameter (Mappings)

 P.S. I also changed the HOT template version to '2013-05-23', but no luck
 ___
 Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-29 Thread Thomas Spatzier
 From: Zane Bitter zbit...@redhat.com
 To: openstack Development Mailing List
openstack-dev@lists.openstack.org
 Date: 29/01/2015 17:47
 Subject: [openstack-dev] [Heat][Keystone] Native keystone resources in
Heat

 I got a question today about creating keystone users/roles/tenants in
 Heat templates. We currently support creating users via the
 AWS::IAM::User resource, but we don't have a native equivalent.

 IIUC keystone now allows you to add users to a domain that is otherwise
 backed by a read-only backend (i.e. LDAP). If this means that it's now
 possible to configure a cloud so that one need not be an admin to create
 users then I think it would be a really useful thing to expose in Heat.
 Does anyone know if that's the case?

 I think roles and tenants are likely to remain admin-only, but we have
 precedent for including resources like that in /contrib... this seems
 like it would be comparably useful.

 Thoughts?

I am really not a keystone expert, so I don't know what the security
implications would be, but I have heard the requirement or wish to be able
to create users, roles etc. from a template many times. I've talked to
people who want to explore this for onboarding use cases, e.g. for
onboarding lines of business in a company, or for onboarding customers in
a public cloud. They would like to have templates that lay out the overall
structure for authentication, and then parameterize them for each
onboarding process.
If this is something that can be enabled, it would be interesting to explore.
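Purely as a thought experiment - no such native resource types existed at the time of this thread, so the OS::Keystone::* type names below are hypothetical - such an onboarding template might look roughly like:

```yaml
heat_template_version: 2014-10-16

parameters:
  line_of_business:
    type: string

resources:
  # Hypothetical native Keystone resources, the natural analogue of
  # the existing AWS::IAM::User support.
  project:
    type: OS::Keystone::Project
    properties:
      name: {get_param: line_of_business}

  admin_user:
    type: OS::Keystone::User
    properties:
      name: {list_join: ['-', [{get_param: line_of_business}, 'admin']]}
      default_project: {get_resource: project}
```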

Regards,
Thomas


 cheers,
 Zane.




Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Thomas Spatzier
+1 on all changes.

Regards,
Thomas

 From: Angus Salkeld asalk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 28/01/2015 02:40
 Subject: [openstack-dev] [Heat] core team changes

 Hi all

 After having a look at the stats:
 http://stackalytics.com/report/contribution/heat-group/90
 http://stackalytics.com/?module=heat-group&metric=person-day

 I'd like to propose the following changes to the Heat core team:

 Add:
 Qiming Teng
 Huang Tianhua

 Remove:
 Bartosz Górski (Bartosz has indicated that he is happy to be removed
 and doesn't have the time to work on heat ATM).

 Core team please respond with +/- 1.

 Thanks
 Angus



Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-13 Thread Thomas Spatzier
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 12/01/2015 21:16
  Subject: Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

 On 09/01/15 07:06, Gregory Haynes wrote:
  Excerpts from Steven Hardy's message of 2015-01-08 17:37:55 +:
  Hi all,
 
  I'm trying to test a fedora-software-config image with some updated
  components.  I need:
 
  - Install latest master os-apply-config (the commit I want isn't
released)
   - Install os-refresh-config fork from https://review.openstack.org/#/c/145764
 
  I can't even get the o-a-c from master part working:
 
  export PATH=${PWD}/dib-utils/bin:$PATH
  export
  ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/
 software-config/elements
  export DIB_INSTALLTYPE_os_apply_config=source
 
  diskimage-builder/bin/disk-image-create vm fedora selinux-permissive \
 os-collect-config os-refresh-config os-apply-config \
 heat-config-ansible \
 heat-config-cfn-init \
 heat-config-docker \
 heat-config-puppet \
 heat-config-salt \
 heat-config-script \
 ntp \
 -o fedora-software-config.qcow2
 
   This is what I'm doing; both tools end up as pip-installed versions
   AFAICS, so I've had to resort to manually hacking the image post-DiB
   using virt-copy-in.
 
   Pretty sure there's a way to make DiB do this, but I don't know how -
   anyone able to share some clues? Do I have to hack the elements, or is
   there a better way?
 
  The docs are pretty sparse, so any help would be much appreciated! :)
 
  Thanks,
 
  Steve
 
  Hey Steve,
 
   source-repositories is your friend here :) (check out
   dib/elements/source-repositories/README). One potential gotcha is that
   because source-repositories is an element, it really only applies to
   tools used within images (and os-apply-config is used outside the
   image). To fix this we have a shim in
   tripleo-incubator/scripts/pull-tools which emulates the functionality
   of source-repositories.
 
  Example usage:
 
  * checkout os-apply-config to the ref you wish to use
  * export DIB_REPOLOCATION_os_apply_config=/path/to/oac
  * export DIB_REPOREF_os_refresh_config=refs/changes/64/145764/1
  * start your devtesting
 
 
 The good news is that devstack is already set up to do this. When
 HEAT_CREATE_TEST_IMAGE=True devstack will build packages from the
 currently checked-out os-*-config tools, build a pip repo and configure
 apache to serve it.

Very cool! Didn't know about it. Thanks for sharing this information :-)


  Then the elements *should* install from these packages - we're not
  gating on this functionality (yet) so it's possible it has regressed,
  but it shouldn't be too hard to get going again.






Re: [openstack-dev] [Heat] How can I write at milestone section of blueprint?

2014-12-19 Thread Thomas Spatzier
Hi Yasunori,

you can submit a blueprint spec as a gerrit review to the heat-specs
repository [1].
I would suggest having a look at some existing specs that have already been
accepted, to see examples of the format, important sections etc.

All kilo-related specs are in a kilo sub-directory in the repo, and the
proposed milestone is mentioned in the spec itself.

[1] https://github.com/openstack/heat-specs

Regards,
Thomas


 From: Yasunori Goto y-g...@jp.fujitsu.com
 To: Openstack-Dev-ML openstack-dev@lists.openstack.org
 Date: 19/12/2014 09:05
 Subject: [openstack-dev] [Heat] How can I write at milestone section
 of blueprint?


 Hello,

 This is the first mail at Openstack community,
 and I have a small question about how to write blueprint for Heat.

 Currently our team would like to propose 2 interfaces
 for users operation in HOT.
 (One is Event handler which is to notify user's defined event to heat.
  Another is definitions of action when heat catches the above
notification.)
 So, I'm preparing the blueprint for it.

 However, I can not find how I can write at the milestone section of
blueprint.

 Heat blueprint template has a section for Milestones.
  Milestones -- Target Milestone for completion:

 But I don't think I can decide it by myself.
 In my understanding, it should be decided by PTL.

 In addition, probably the above our request will not finish
 by Kilo. I suppose it will be L version or later.

 So, what should I write at this section?
 Kilo-x, L version, or empty?

 Thanks,

 --
 Yasunori Goto y-g...@jp.fujitsu.com



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Heat] image requirements for Heat software config

2014-10-15 Thread Thomas Spatzier
Excerpts from Clint Byrum's message on 14/10/2014 23:38:46:
snip
 
  Regarding the process of building base images, the currently documented
way
  [1] of using diskimage-builder turns out to be a bit unstable
sometimes.
  Not because diskimage-builder is unstable, but probably because it
pulls in
  components from a couple of sources:
  #1 we have a dependency on implementation of the Heat engine of course
(So
  this is not pulled in to the image building process, but the dependency
is
  there)
  #2 we depend on features in python-heatclient (and other python-*
clients)
  #3 we pull in implementation from the heat-templates repo
  #4 we depend on tripleo-image-elements
  #5 we depend on os-collect-config, os-refresh-config and
os-apply-config
  #6 we depend on diskimage-builder itself
 
  Heat itself and python-heatclient are reasonably well in synch because
  there is a release process for both, so we can tell users with some
  certainty that a feature will work with release X of OpenStack and Heat
and
  version x.z.y of python-heatclient. For the other 4 sources, success
  sometimes depends on the time of day when you try to build an image
  (depending on what changes are currently included in each repo). So
  basically there does not seem to be a consolidated release process
across
  all that is currently needed for software config.
 

 I don't really understand why a consolidated release process across
 all would be desired or needed.

Well, all the pieces have to fit together for everything to work. I have
had many situations where I used the currently up-to-date version of each
piece but something just did not work. Then I found that some patch was in
review on one of those, so trying a few days later worked.
It would be good for users to have one verified package of everything
instead of going through a trial-and-error process.
Maybe this is going to improve in the future, since so far, or until
recently, a lot of software config was still work in progress. But up to
now, the image building has been a challenge at times.


 #3 is pretty odd. You're pulling in templates from the examples repo?

We have to pull in the image elements and hooks for software config from
there.


 For #4-#6, those are all on pypi and released on a regular basis. Build
 yourself a bandersnatch mirror and you'll have locally controlled access
 to them which should eliminate any reliability issues.

So switching from git-repo-based installs as described in [1] to pypi-based
installs, where I can specify a version number, would help?
Then what we would still need is a set of versions for each package that
are verified to work together (my previous point).


  The ideal solution would be to have one self-contained package that is
easy
  to install on various distributions (an rpm, deb, MSI ...).
  Secondly, it would be ideal to not have to bake additional things into
the
  image but doing bootstrapping during instance creation based on an
existing
  cloud-init enabled image. For that we would have to strip requirements
down
  to a bare minimum required for software config. One thing that comes to
my
  mind is the cirros software config example [2] that Steven Hardy
created.
   It is admittedly not up to what one could do with an image built
   according to [1] but on the other hand is really slick, whereas [1]
   installs a whole
whole
  set of things into the image (some of which do not really seem to be
needed
  for software config).

 The agent problem is one reason I've been drifting away from Heat
 for software configuration, and toward Ansible. Mind you, I wrote
 os-collect-config to have as few dependencies as possible as one attempt
 around this problem. Still it isn't capable enough to do the job on its
 own, so you end up needing os-apply-config and then os-refresh-config
 to tie the two together.

 Ansible requires sshd, and python, with a strong recommendation for
 sudo. These are all things that pretty much every Linux distribution is
 going to have available.

Interesting, I have to investigate this. Thanks for the hint.


 
  Another issue that comes to mind: what about operating systems not
  supported by diskimage-builder (Windows), or other hypervisor
platforms?
 

 There is a windows-diskimage-builder:

 https://git.openstack.org/cgit/stackforge/windows-diskimage-builder

Good to know; I wasn't aware of it. Thanks!


 diskimage-builder can produce raw images, so that should be convertible
 to pretty much any other hypervisor's preferred disk format.



Re: [openstack-dev] [Heat] image requirements for Heat software config

2014-10-15 Thread Thomas Spatzier
Excerpts from Steve Baker's message on 14/10/2014 23:52:41:

snip
 Regarding the process of building base images, the currently documented
way
 [1] of using diskimage-builder turns out to be a bit unstable sometimes.
 Not because diskimage-builder is unstable, but probably because it pulls
in
 components from a couple of sources:
 #1 we have a dependency on implementation of the Heat engine of course
(So
 this is not pulled in to the image building process, but the dependency
is
 there)
 #2 we depend on features in python-heatclient (and other python-*
clients)
 #3 we pull in implementation from the heat-templates repo
 #4 we depend on tripleo-image-elements
 #5 we depend on os-collect-config, os-refresh-config and os-apply-config
 #6 we depend on diskimage-builder itself

 Heat itself and python-heatclient are reasonably well in synch because
 there is a release process for both, so we can tell users with some
 certainty that a feature will work with release X of OpenStack and Heat
and
 version x.z.y of python-heatclient. For the other 4 sources, success
 sometimes depends on the time of day when you try to build an image
 (depending on what changes are currently included in each repo). So
 basically there does not seem to be a consolidated release process across
 all that is currently needed for software config.

 The ideal solution would be to have one self-contained package that is
easy
 to install on various distributions (an rpm, deb, MSI ...).
 Secondly, it would be ideal to not have to bake additional things into
the
 image but doing bootstrapping during instance creation based on an
existing
 cloud-init enabled image. For that we would have to strip requirements
down
 to a bare minimum required for software config. One thing that comes to
my
 mind is the cirros software config example [2] that Steven Hardy created.
  It is admittedly not up to what one could do with an image built according
 to [1] but on the other hand is really slick, whereas [1] installs a
whole
 set of things into the image (some of which do not really seem to be
needed
 for software config).


 Building an image from git repos was the best chance of having a
 single set of instructions which works for most cases, since the
 tools were not packaged for debian derived distros. This seems to be
 improving though; the whole build stack is now packaged for Debian
 Unstable, Testing and also Ubuntu Utopic (which isn't released yet).
 Another option is switching the default instructions to installing
 from pip rather than git, but that still gets into distro-specific
 quirks which complicate the instructions. Until these packages are
 on the recent releases of common distros then we'll be stuck in this
 slightly awkward situation.

Yeah, I understand that the current situation is probably there because we
are so close to the point where the features get developed. So hopefully
this will improve and stabilize in the future.


 I wrote a cloud-init boot script to install the agents from packages
 from a pristine Fedora 20 [3] and it seems like a reasonable
 approach for when building a custom image isn't practical. Somebody
 submitting the equivalent for Debian and Ubuntu would be most
 welcome. We need to decide whether *everything* should be packaged
 or if some things can be delivered by cloud-init on boot (os-
 collect-config.conf template, 55-heat-config, the actual desired
 config hook...)
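The boot-time approach Steve describes could be sketched as cloud-init user data along these lines (a rough, unverified sketch; pip availability and the exact set of config files to deliver will vary by distro):

```yaml
#cloud-config
# Rough sketch: install the software-config agents at boot instead of
# baking them into the image. Assumes pip works on the target distro;
# a real version must also deliver os-collect-config.conf, the
# 55-heat-config refresh hook, and the desired config hook(s).
packages:
  - python-pip
runcmd:
  - pip install os-collect-config os-apply-config os-refresh-config
  - os-collect-config --one-time
```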

Thanks for the pointer. I'll have a look. I think if we can put as few
requirements on the base image as possible and do as much as possible at
boot, that would be good. If help is needed to get this done for other
distros (and for Windows), we can certainly work on this. We just have to
agree and be convinced that this is the right path.


 I'm all for there being documentation for the different ways of
 getting the agent and hooks onto a running server for a given
 distro. I think the hot-guide would be the best place to do that,
 and I've been making a start on that recently [4][5] (help
 welcome!). The README in [1] should eventually refer to the hot-
 guide once it is published so we're not maintaining multiple build
 instructions.

I'll have a look at all the pointers. Agree that this is extremely useful.

BTW: the unit testing work you started on the software config hooks will
definitely help as well!


 Another issue that comes to mind: what about operating systems not
 supported by diskimage-builder (Windows), or other hypervisor platforms?

 The Cloudbase folk have contributed some useful cloudbase-init
 templates this cycle [6], so that is a start.  I think there is
 interest in porting os-*-config to Windows as the way of enabling
 deployment resources (help welcome!).

Yes, I've seen those templates. As long as there is an image that works
with them, this is great. I have to look closer into the Windows side of
things.


 Anyway, not really suggestions from my side but more observations and
 thoughts. I wanted to share those and raise some 

[openstack-dev] [Heat] image requirements for Heat software config

2014-10-14 Thread Thomas Spatzier

Hi all,

I have been experimenting a lot with Heat software config to check out
what works today, and to think about potential next steps.
I've also worked on an internal project where we are leveraging software
config as of the Icehouse release.

I think what we can do now from a user's perspective in a HOT template is
really nice and resonates well also with customers I've talked to.
One of the points where we are constantly having issues, and also got some
push back from customers, are the requirements on the in-instance tools and
the process of building base images.
One observation is that building a base image with all the right stuff
inside sometimes is a brittle process; the other point is that a lot of
customers do not like a lot of requirements on their base images. They want
to maintain one set of corporate base images, with as little modification
on top as possible.

Regarding the process of building base images, the currently documented way
[1] of using diskimage-builder turns out to be a bit unstable sometimes.
Not because diskimage-builder is unstable, but probably because it pulls in
components from a couple of sources:
#1 we have a dependency on implementation of the Heat engine of course (So
this is not pulled in to the image building process, but the dependency is
there)
#2 we depend on features in python-heatclient (and other python-* clients)
#3 we pull in implementation from the heat-templates repo
#4 we depend on tripleo-image-elements
#5 we depend on os-collect-config, os-refresh-config and os-apply-config
#6 we depend on diskimage-builder itself

Heat itself and python-heatclient are reasonably well in synch because
there is a release process for both, so we can tell users with some
certainty that a feature will work with release X of OpenStack and Heat and
version x.z.y of python-heatclient. For the other 4 sources, success
sometimes depends on the time of day when you try to build an image
(depending on what changes are currently included in each repo). So
basically there does not seem to be a consolidated release process across
all that is currently needed for software config.

The ideal solution would be to have one self-contained package that is easy
to install on various distributions (an rpm, deb, MSI ...).
Secondly, it would be ideal to not have to bake additional things into the
image but doing bootstrapping during instance creation based on an existing
cloud-init enabled image. For that we would have to strip requirements down
to a bare minimum required for software config. One thing that comes to my
mind is the cirros software config example [2] that Steven Hardy created.
It is admittedly not up to what one could do with an image built according
to [1] but on the other hand is really slick, whereas [1] installs a whole
set of things into the image (some of which do not really seem to be needed
for software config).

Another issue that comes to mind: what about operating systems not
supported by diskimage-builder (Windows), or other hypervisor platforms?

Anyway, these are not really suggestions from my side but more observations
and thoughts. I wanted to share those and raise some discussion on possible
options.

Regards,
Thomas

[1]
https://github.com/openstack/heat-templates/blob/master/hot/software-config/elements/README.rst
[2]
https://github.com/openstack/heat-templates/tree/master/hot/software-config/example-templates/cirros-example




Re: [openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-23 Thread Thomas Spatzier
Excerpts from Steven Hardy's message on 23/09/2014 11:37:42:
snip
 
  On the other hand, there is a big downside to having it (only) in Heat
  also - you're dependent on the operator deciding to provide it.
 
  - You preempt the decision about integration with any higher level
 services, e.g. Mistral, Murano, Solum, if you bake in the translator at the
 heat level.
 
  Not sure I understand this one.

 I meant if non-simple TOSCA was in scope, would it make sense to bake the
 translation in at the heat level, when there are aspects of the DSL which
 we will never support (but some higher layer might).

 Given Sahdev's response saying simple-profile is all that is currently in
 scope, it's probably a non-issue, I just wanted to clarify if heat was the
 right place for this translation.

Yes, so definitely so far the scope is the TOSCA simple profile, which aims
to be easily mappable to Heat/HOT. Therefore, close integration makes sense
IMO.

Even if the TOSCA community focuses more on features not covered by Heat
(e.g. the plans or workflows, which would rather map to Mistral), the
current decision would not preempt such movements. An overall TOSCA
description is basically modular, where parts like the declarative topology
could be handed to one layer - this is what we are talking about now - and
other parts (like the flows) could go to a different layer.
So if we get there in the future, this decomposer functionality could be
addressed on top of the layer we are working on today.
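As a rough illustration of that modularity: a TOSCA simple profile document
keeps the declarative topology in one section, and that is the part which
maps naturally to Heat resources. The snippet below is only a sketch - node
and property names are made up for illustration, and the exact keyword
spellings changed across drafts of the spec:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0
topology_template:
  node_templates:
    my_server:           # declarative part -> would map to OS::Nova::Server
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```

Plans/workflows, if present, would live in a separate section of the same
document and could be handed to a workflow layer instead.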

Hope this helps and does not add confusion :-)

Regards,
Thomas


 Steve






Re: [openstack-dev] [Heat] naming of provider template for docs

2014-09-19 Thread Thomas Spatzier
 From: Mike Spreitzer mspre...@us.ibm.com
 To: OpenStack Development Mailing List \(not for usage questions\)
 openstack-dev@lists.openstack.org
 Date: 19/09/2014 07:15
 Subject: Re: [openstack-dev] [Heat] naming of provider template for docs

 Angus Salkeld asalk...@mirantis.com wrote on 09/18/2014 09:33:56 PM:

  Hi

  I am trying to add some docs to openstack-manuals hot_guide about
  using provider templates : https://review.openstack.org/#/c/121741/

  Mike has suggested we use a different term, he thinks provider is
  confusing.
  I agree that at the minimum, it is not very descriptive.

   Mike has suggested nested stack, I personally think this means
   something a bit more general to many of us (it includes the concept of
   aws stacks) and may I suggest template resource - note this is even
   the class name for this exact functionality.
 
  Thoughts?

  Option 1) stay as is provider templates
  Option 2) nested stack
  Option 3) template resource

Out of those 3 I like #3 the most, even though not perfect as Mike
discussed below.


 Thanks for rising to the documentation challenge and trying to get
 good terminology.

 I think your intent is to describe a category of resources, so your
 option 3 is superior to option 1 --- the thing being described is
 not a template, it is a resource (made from a template).

 I think

 Option 4) custom resource

That one sounds too generic to me, since custom Python-based resource
plugins are also custom resources.


 would be even better.  My problem with template resource is that,
 to someone who does not already know what it means, this looks like
 it might be a kind of resource that is a template (e.g., for
 consumption by some other resource that does something with a
 template), rather than itself being something made from a template.
 If you want to follow this direction to something perfectly clear,
 you might try templated resource (which is a little better) or
 template-based resource (which I think is pretty clear but a bit
 wordy) --- but an AWS::CloudFormation::Stack is also based on a
 template.  I think that if you try for a name that really says all
 of the critical parts of the idea, you will get something that is
 too wordy and/or awkward.  It is true that custom resource begs
 the question of how the user accomplishes her customization, but at
 least now we have the reader asking the right question instead of
 being misled.

I think template-based resource really captures the concept best. And it
is not too wordy IMO.
If it helps to explain the concept intuitively, I would be in favor of it.
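For readers who have not used the feature: whatever we end up calling it,
the mechanism is an environment file that maps a custom type name to a
template, e.g. (file and type names here are made up):

```yaml
# env.yaml - register my_server.yaml as a template-based resource
resource_registry:
  OS::MyOrg::Server: my_server.yaml
```

A stack template can then declare resources of type OS::MyOrg::Server, and
Heat instantiates my_server.yaml as a nested stack for each one (e.g.
heat stack-create -f main.yaml -e env.yaml mystack).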

Regards,
Thomas


 I agree that nested stack is a more general concept.  It describes
 the net effect, which the things we are naming have in common with
 AWS::CloudFormation::Stack.  I think it would make sense for our
 documentation to say something like both an
 AWS::CloudFormation::Stack and a custom resource are ways to specify
 a nested stack.

 Thanks,
 Mike




Re: [openstack-dev] [HEAT] Questions on adding a new Software Config element for Opscode Chef

2014-08-02 Thread Thomas Spatzier
 From: Tao Tao t...@us.ibm.com
 To: openstack-dev@lists.openstack.org
 Cc: Shu Tao shu...@us.ibm.com, Yan Yan YY Hu
 yanya...@cn.ibm.com, Bo B Yang yang...@cn.ibm.com
 Date: 01/08/2014 22:19
 Subject: [openstack-dev] [HEAT] Questions on adding a new Software
 Config element for Opscode Chef

 Hi, All:

 We are trying to leverage Heat software config model to support
 Chef-based software installation. Currently the chef-based software
 config is not in place with Heat version 0.2.9.

 Therefore, we do have a number of questions on the implementation by
ourselves:

 1. Should we create new software config child resource types (e.g.
 OS::Heat::SoftwareConfig::Chef and
 OS::Heat::SoftwareDeployment::Chef proposed in the https://
 wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec) or
 should we reuse the existing software config resource type (e.g.
 OS::Heat::SoftwareConfig by leveraging group attribute) like the

Right, you should not implement your own resource plugin, but use the
current SoftwareConfig resource and specify 'chef' in the group property.
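A minimal sketch of what that could look like (the recipe path is
hypothetical, and the exact inputs the chef hook expects are defined by the
hook itself - see the review referenced below for the authoritative
interface):

```yaml
chef_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: chef        # selects the chef in-instance hook
    config:
      get_file: cookbooks/my_cookbook/recipes/default.rb

chef_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: {get_resource: chef_config}
    server: {get_resource: server}
```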

You will have to build an image that contains the right in-instance tools
as described here:
https://github.com/openstack/heat-templates/blob/master/hot/software-config/elements/README.rst

Note that the in-instance hook for handling chef-solo is still in review
and has not yet been merged to the heat-templates repository. Before
building your image, you can pull the changes for the chef config hook from
this review:

https://review.openstack.org/#/c/80229/

 following example with Puppet? What are the pros and cons with
 either approach?

    config:
      type: OS::Heat::SoftwareConfig
      properties:
        group: puppet
        inputs:
        - name: foo
        - name: bar
        outputs:
        - name: result
        config:
          get_file: config-scripts/example-puppet-manifest.pp

    deployment:
      type: OS::Heat::SoftwareDeployment
      properties:
        config:
          get_resource: config
        server:
          get_resource: server
        input_values:
          foo: fo
          bar: ba

 2. Regarding OpsCode Chef and Heat integration, should our software
 config support chef-solo only, or should support Chef server? In
 another word,  should we let Heat to do the orchestration for the
 chef-based software install or should we continue to use chef-server
 for the chef-based software install?

I would say, with the current chef in-instance hook for Heat, you could use
chef-solo for the software install per package and per server and let Heat
do the overall orchestration.


 3. In the current implementation of software config hook for puppet
 as follows:

 heat-templates / hot / software-config / elements / heat-config-puppet /
 install.d / 50-heat-config-hook-puppet

  3.1 why do we need a 50-* prefix for the heat-config hook name?

 3.2 In the script as follows, what is the install-packages script?
 where does it load puppet package? How would we change the script to
 install chef package?

 #!/bin/bash
 set -x

 SCRIPTDIR=$(dirname $0)

 install-packages puppet
 install -D -g root -o root -m 0755 ${SCRIPTDIR}/hook-puppet.py /var/lib/heat-config/hooks/puppet

 4. With diskimage-builder, we can build in images with many software
 config elements(chef, puppet, script, salt), which means there will
 be many hooks in the image.
  However, By reading the source code of the os-refresh-config, it
 seems it will execute only the hooks which has corresponding group
 defined in the software config, is that right?

Right, you have to have the group property in your SoftwareConfig resources
set appropriately.


 def invoke_hook(c, log):
     # sanitise the group to get an alphanumeric hook file name
     hook = "".join(
         x for x in c['group'] if x == '-' or x == '_' or x.isalnum())
     hook_path = os.path.join(HOOKS_DIR, hook)

     signal_data = None
     if not os.path.exists(hook_path):
         log.warn('Skipping group %s with no hook script %s' % (
             c['group'], hook_path))
     else:


 Thanks a lot for your kind assistance!



 Thanks,
 Tao Tao, Ph.D.
 IBM T. J. Watson Research Center
 1101 Kitchawan Road
 Yorktown Heights, NY 10598
 Phone: (914) 945-4541
 Email: t...@us.ibm.com




Re: [openstack-dev] heat stack-create with two vm instances always got one failed

2014-07-16 Thread Thomas Spatzier
I think the problem could be that you need one network port for each
server, but you just have one OS::Neutron::Port resource defined.
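In other words, each server needs its own port resource. A HOT-syntax
sketch of the fix (image/flavor omitted, only the relevant parts shown):

```yaml
resources:
  port1:
    type: OS::Neutron::Port
    properties:
      network_id: {get_resource: test_net}
  port2:
    type: OS::Neutron::Port
    properties:
      network_id: {get_resource: test_net}
  instance1:
    type: OS::Nova::Server
    properties:
      networks: [{port: {get_resource: port1}}]
  instance2:
    type: OS::Nova::Server
    properties:
      networks: [{port: {get_resource: port2}}]
```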

yulin...@dell.com wrote on 16/07/2014 08:17:00:
 From: yulin...@dell.com
 To: openstack-dev@lists.openstack.org
 Date: 16/07/2014 08:20
 Subject: [openstack-dev] heat stack-create with two vm instances
 always got one failed

 Dell Customer Communication

 Hi,
 I'm using heat to create a stack with two instances. I always got
 one of them successful, but the other would fail. If I split the
 template into two and each of them contains one instance then it
 worked. However, I thought Heat template would allow multiple
 instances being created?

 Here I attach the heat template:
 {
   "AWSTemplateFormatVersion" : "2010-09-09",
   "Description" : "Sample Heat template that spins up multiple instances and a private network (JSON)",
   "Resources" : {
     "test_net" : {
       "Type" : "OS::Neutron::Net",
       "Properties" : {
         "name" : "test_net"
       }
     },
     "test_subnet" : {
       "Type" : "OS::Neutron::Subnet",
       "Properties" : {
         "name" : "test_subnet",
         "cidr" : "120.10.9.0/24",
         "enable_dhcp" : true,
         "gateway_ip" : "120.10.9.1",
         "network_id" : { "Ref" : "test_net" }
       }
     },
     "test_net_port" : {
       "Type" : "OS::Neutron::Port",
       "Properties" : {
         "admin_state_up" : true,
         "network_id" : { "Ref" : "test_net" }
       }
     },
     "instance1" : {
       "Type" : "OS::Nova::Server",
       "Properties" : {
         "name" : "instance1",
         "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
         "flavor" : "tvm-tt_lite",
         "networks" : [
           { "port" : { "Ref" : "test_net_port" } }
         ]
       }
     },
     "instance2" : {
       "Type" : "OS::Nova::Server",
       "Properties" : {
         "name" : "instance2",
         "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
         "flavor" : "tvm-tt_lite",
         "networks" : [
           { "port" : { "Ref" : "test_net_port" } }
         ]
       }
     }
   }
 }
 The error that I got from heat-engine.log is as follows:

 2014-07-16 01:49:50.514 25101 DEBUG heat.engine.scheduler [-] Task
 resource_action complete step /usr/lib/python2.6/site-packages/heat/
 engine/scheduler.py:170
 2014-07-16 01:49:50.515 25101 DEBUG heat.engine.scheduler [-] Task
 stack_task from Stack teststack sleeping _sleep /usr/lib/python2.
 6/site-packages/heat/engine/scheduler.py:108
 2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task
 stack_task from Stack teststack running step /usr/lib/python2.6/
 site-packages/heat/engine/scheduler.py:164
 2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task
 resource_action running step /usr/lib/python2.6/site-packages/heat/
 engine/scheduler.py:164
 2014-07-16 01:49:51.960 25101 DEBUG urllib3.connectionpool [-] GET
 /v2/b64803d759e04b999e616b786b407661/servers/7cb9459c-29b3-4a23-
 a52c-17d85fce0559 HTTP/1.1 200 1854 _make_request /usr/lib/python2.
 6/site-packages/urllib3/connectionpool.py:295
 2014-07-16 01:49:51.963 25101 ERROR heat.engine.resource [-] CREATE
 : Server instance1
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Traceback
 (most recent call last):
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File /
 usr/lib/python2.6/site-packages/heat/engine/resource.py, line 371,
 in _do_action
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource while
 not check(handle_data):
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File /
 usr/lib/python2.6/site-packages/heat/engine/resources/server.py,
 line 239, in check_create_complete
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource return
 self._check_active(server)
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File /
 usr/lib/python2.6/site-packages/heat/engine/resources/server.py,
 line 255, in _check_active
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource raise exc
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Error:
 Creation of server instance1 failed.
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource
 2014-07-16 01:49:51.996 25101 DEBUG heat.engine.scheduler [-] Task
 resource_action cancelled cancel /usr/lib/python2.6/site-packages/
 heat/engine/scheduler.py:187
 2014-07-16 01:49:52.004 25101 DEBUG heat.engine.scheduler [-] Task
 stack_task from Stack teststack complete step /usr/lib/python2.6/
 site-packages/heat/engine/scheduler.py:170
 2014-07-16 01:49:52.005 25101 WARNING heat.engine.service [-] Stack
 create failed, status FAILED
 2014-07-16 01:50:29.218 25101 DEBUG heat.openstack.common.rpc.amqp
 [-] received {u'_context_roles': [u'Member', u'admin'], u'_msg_id':
 u'9aedf86fda304cfc857dc897d8393427', u'_context_password':
 'SANITIZED', 

Re: [openstack-dev] [Heat] Nova-network support

2014-07-14 Thread Thomas Spatzier
 From: Pavlo Shchelokovskyy pshchelokovs...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 14/07/2014 16:42
 Subject: [openstack-dev] [Heat] Nova-network support

 Hi Heaters,

 I would like to start a discussion about Heat and nova-network. As
 far as I understand nova-network is here to stay for at least 2 more
 releases [1] and, even more, might be left indefinitely as a viable
 simple deployment option supported by OpenStack (if anyone has a
 more recent update on nova-network deprecation status please call me
 out on that).

 In light of this I think we should improve our support of nova-
 network-based OpenStack in Heat. There are several topics that
 warrant attention:

 1) As python-neutronclient is already set as a dependency of heat
 package, we need a unified way for Heat to understand what network
 service the OpenStack cloud uses that does not depend on presence or
 absence of neutronclient. Several resources already need this (e.g.
 AWS::EC2::SecurityGroup that currently decides on whether to use
 Neutron or Nova-network only by a presence of VPC_ID property in the
 template). This check might be a config option but IMO this could be
 auto-discovered on heat-engine start. Also, when current Heat is
 deployed on nova-network-based OpenStack, OS::Neutron::* resources
 are still being registered and shown with heat resource-type-list
 (at least on DevStack that is) although clearly they can not be
 used. A network backend check would then allow to disable those
 Neutron resources for such deployment. (On a side note, such checks
 might also be created for resources of other integrated but not
 bare-minimum essential OpenStack components such as Trove and Swift.)

 2) We need more native nova-network specific resources. For example,
 to use security groups on nova-network now one is forced to use
 AWS::EC2::SecurityGroup, that looks odd when used among other
 OpenStack native resources and has its own limitations as its
 implementation must stay compatible with AWS. Currently it seems we
 are also missing native nova-network Network, Cloudpipe VPN, DNS
 domains and entries (though I am not sure how admin-specific those are).

 If we agree that such improvements make sense, I will gladly put
 myself to implement these changes.

I think those improvements do make sense, since neutron cannot be taken as
a given in every environment.

Ideally, we would actually have a resource model that abstracts from the
underlying implementation, i.e. does not call out neutron or nova-net but
just talks about something like a FloatingIP, which then gets implemented by
a neutron or nova-net backend. Currently, binding to either option has to
be explicitly defined in templates, so in the worst case one might end up
with two completely separate definitions of the same thing.
That said, I know that it will probably be hard to come up with an
abstraction for everything. I also know that provider templates could
partly solve the problem today, but many users probably do not know how to
apply them.
Some level of abstraction could also help to make some changes in
underlying API transparent to templates.
Anyway, I wanted to throw out the idea of some level of abstraction and see
what the reactions are.

Regards,
Thomas


 Best regards,
 Pavlo Shchelokovskyy.

 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-January/
 025824.html




Re: [openstack-dev] [heat] Update behavior for CFN compatible resources

2014-07-07 Thread Thomas Spatzier
 From: Steven Hardy sha...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 07/07/2014 10:39
 Subject: [openstack-dev] [heat] Update behavior for CFN compatible
resources

 Hi all,

 Recently I've been adding review comments, and having IRC discussions
about
 changes to update behavior for CloudFormation compatible resources.

 In several cases, folks have proposed patches which allow non-destructive
 update of properties which are not allowed on AWS (e.g which would result
 in destruction of the resource were you to run the same template on CFN).

 Here's an example:

 https://review.openstack.org/#/c/98042/

 Unfortunately, I've not spotted all of these patches, and some have been
 merged, e.g:

 https://review.openstack.org/#/c/80209/

 Some folks have been arguing that this minor deviation from the AWS
 documented behavior is OK.  My argument is that it definitely is not,
 because if anyone who cares about heat-CFN portability develops a
 template on heat, then runs it on CFN, a non-destructive update suddenly
 becomes destructive, which is a bad surprise IMO.

+1


 I think folks who want the more flexible update behavior should simply
 use the native resources instead, and that we should focus on aligning
 the CFN compatible resources as closely as possible with the actual
 behavior on CFN.

+1 on that as well


 What are peoples thoughts on this?

 My request, unless others strongly disagree, is:

 - Contributors, please check the CFN docs before starting a patch
   modifying update for CFN compatible resources
 - heat-core, please check the docs and don't approve patches which make
   heat behavior diverge from that documented for CFN.

 The AWS docs are pretty clear about update behavior, they can be found
 here:

 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-
 template-resource-type-ref.html

 The other problem, if we agree that aligning update behavior is
 desirable, is what we do regarding deprecation for existing diverged
 update behavior?

 Steve






Re: [openstack-dev] [heat] Sergey Kraynev for heat-core

2014-06-27 Thread Thomas Spatzier
 From: Steve Baker sba...@redhat.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
 Date: 27/06/2014 00:10
 Subject: [openstack-dev] [heat] Sergey Kraynev for heat-core

 I'd like to nominate Sergey Kraynev for heat-core. His reviews are
 valuable and prolific, and his commits have shown a sound understanding
 of heat internals.


+1


 http://stackalytics.com/report/contribution/heat-group/60






Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-30 Thread Thomas Spatzier
Excerpt from Zane Bitter's message on 29/05/2014 20:57:10:

 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 29/05/2014 20:59
 Subject: Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle
 collaborative meetup
snip
 BTW one timing option I haven't seen mentioned is to follow Pycon-AU's
 model of running e.g. Friday-Tuesday (July 25-29). I know nobody wants
 to be stuck in Raleigh, NC on a weekend (I've lived there, I understand
 ;), but for folks who have a long ways to travel it's one weekend lost
 instead of two.

+1 - excellent idea!


 cheers,
 Zane.






Re: [openstack-dev] Heat SoftwareConfig and SoftwareDevelopment

2014-05-15 Thread Thomas Spatzier
Excerpts from Lingxian Kong's message on 15/05/2014 22:51:33:
 From: Lingxian Kong anlin.k...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 15/05/2014 22:53
 Subject: Re: [openstack-dev] Heat SoftwareConfig and SoftwareDevelopment

snip

 Do you know any information about examples or tutorials when using
 SoftwareConfig and SoftwareDeployment? I found it's a little difficult
 to understand for me.

Unfortunately, we do not have any good documentation (or any documentation
at all) on this right now. I was planning to start some, but did not get to
it so far. For now, you could look at some examples in the heat-templates
repository at

https://github.com/openstack/heat-templates/tree/master/hot/software-config/example-templates

Especially in

https://github.com/openstack/heat-templates/tree/master/hot/software-config/example-templates/wordpress

you can see the WordPress example that has been widely used in Heat
samples, but making use of SoftwareConfig and SoftwareDeployment.

Regards,
Thomas


 I know there are already some wikis, but not sufficient for me, :(

 Thanks!

 --
 -
  Lingxian Kong




Re: [openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

2014-05-08 Thread Thomas Spatzier
Hi Steve,

I have added something to the design summit etherpad at [1] based on this
ML discussion so far. I removed some items from my initial post since they
seem to be resolved. I copied more concrete points from this thread into
other items. Please have a look and edit as needed.

[1] https://etherpad.openstack.org/p/juno-summit-heat-sw-orch

Regards,
Thomas

Steve Baker sba...@redhat.com wrote on 28/04/2014 23:31:56:

 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 28/04/2014 23:32
 Subject: Re: [openstack-dev] [Heat] Design summit preparation - Next
 steps for Heat Software Orchestration

 On 29/04/14 01:41, Thomas Spatzier wrote:
  Excerpts from Steve Baker's message on 28/04/2014 01:25:29:
 
  I'm with Clint on this one. Heat-engine cannot know the true state
  of a server just by monitoring what has been polled and signaled.
  Since it can't know it would be dangerous for it to guess. Instead
  it should just offer all known configuration data to the server and
  allow the server to make the decision whether to execute a config
  again. I still think one more derived input value would be useful to
  help the server to make that decision. This could either be a
  datestamp for when the derived config was created, or a hash of all
  of the derived config data.
  So as I said in another note, I agree that this seems best handled in
  the in-instance tool, and the Heat engine or the resource should
  probably not have any new magic. If there is some additional state
  property that the resource maintains, and the in-instance tool handles
  it, that should be fine. I think what is important is that users who
  want to use existing automation scripts do not have to implement much
  logic for interpreting that additional flag, but that we handle it in
  the generic hook invocation logic.
 
  Can you elaborate more on what you have in mind with the additional
  derived input value?
 
 Heat needs to give the hook or the config script enough information to
 know whether that *particular* combination of config script + input
 values has been executed on that server. It could do this by providing
 the timestamp or the hash of the derived config, then this piece of
 information can be compared with some local state on the server to
 decide whether to run the config again. Actually the hash could be
 calculated on the server too, so the hash could be calculated in
 55-heat-config then consumed by the hook or config script.
 
  For this design session I have my own list of items to discuss:
  #4.1 Maturing the puppet hook so it can invoke more existing puppet
  scripts
  #4.2 Make progress on the chef hook, and defining the mapping from
  chef concepts to heat config/inputs/outputs
  #4.3 Finding volunteers to write hooks for Salt, Ansible
  #5.1 Now that heatclient can include binary files, discuss enhancing
  get_file to zip the directory contents if it is pointed at a directory
  #5.2 Now that heatclient can include binary files, discuss making
  stack create/update API calls multipart/form-data so that proper
  mime data can be captured for attached files
  #6.1 Discuss options for where else metadata could be polled from (ie,
  swift)
  #6.2 Discuss whether #6.1 can lead to software-config that can work
  on an OpenStack which doesn't allow admin users or keystone domains
  (ie, rackspace)
  #4.1 thru #4.3 are important and seem straight forward and more about
  finding people to do it. If there are design issues to be figured out,
  maybe we can do it offline via the ML.
 
  #5.1 and #5.2 are really interesting and map to use cases we have also
  seen internally. Is there a size limit for the binaries? Would this also
  cover, e.g., sending small binaries like a wordpress install tgz instead
  of doing a yum based install? Or would the latter be something to
  address via #6 below?
 
  #6 looks very interesting as well. We also thought about using swift not
  only for metadata but also for sharing installables to instances in
  cases where direct download from the internet, for example, is not
  possible.
 We'll just have to try it to find out where the limits are, but in
 general I would assume the following:
 * user_data limited to about 16k total, so anything bigger than that
 needs to go in the deployment input_values
 * practically speaking, a binary could go in a deployment input value or
 a swift object resource (which doesn't exist yet) to be passed to the
 deployment input value by url
 * The default heat.conf max_template_size=524288, so to avoid this limit
 binaries should be put into swift outside the scope of heat, and passed
 into the template as a parameter URL.






Re: [openstack-dev] [Heat] Special session on heat-translator project at Atlanta summit

2014-05-08 Thread Thomas Spatzier
Hi all,

we have put up an etherpad [1] with a draft agenda for the session on Heat
Translator next week. It is an outline of things that we want to talk
about to give an overview and some background on the project, as well as a
collection of things that came to our minds.

It is important to say that the project is really young and still taking
shape. Compared to other design sessions where the issues being
discussed are much more concrete, expect this session to be of a broader
scope. It is really meant to show what we have so far, what our vision is,
but then also to shape the project so it best benefits the OpenStack
Orchestration program. Therefore, we are really interested in everyone's
input for that session so that we can make best use of everyone's time
there.

That said, feel free to add your input to the etherpad. We will then
consolidate it in time for the session next Thursday.

[1] https://etherpad.openstack.org/p/juno-design-summit-heat-translator

Regards,
Thomas

 From: Thomas Spatzier/Germany/IBM@IBMDE
 To: OpenStack Development Mailing List \(not for usage questions\)
 openstack-dev@lists.openstack.org
 Date: 06/05/2014 10:47
 Subject: Re: [openstack-dev] [Heat] Special session on heat-
 translator project at Atlanta summit

 Hi Dimitri,

 I think we will cover the relation to the overall Heat vision during that
  session. That would actually be a very good use of time during that
  session, because we want to shape the project in a way that it adds
  benefit to the
 overall orchestration project. From our perspective, TOSCA is one driving
 factor, but it's not a pure TOSCA discussion.
 For example, during last year's summit in Portland we came up with this
  vision architecture chart in [1], and heat-translator could be a
  starting point for this pluggable formats layer. Maybe we will also
  incubate some features in heat-translator and later move them more
  tightly into Heat ...
 all points to be sorted out and to develop over time.

 My 2 cents.

 Thomas

 Dimitri Mazmanov dimitri.mazma...@ericsson.com wrote on 06/05/2014
 10:16:44:

  From: Dimitri Mazmanov dimitri.mazma...@ericsson.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 06/05/2014 10:18
  Subject: Re: [openstack-dev] [Heat] Special session on heat-
  translator project at Atlanta summit
 
  Great effort! One question, will this session relate to the overall
  Heat vision [1] (Model Interpreters, API Relay, etc)?
 
  -
  Dimitri
 
  [1] https://wiki.openstack.org/wiki/Heat/Vision
 
  On 05/05/14 19:21, Thomas Spatzier thomas.spatz...@de.ibm.com
wrote:
 
  Hi all,
 
  I mentioned in some earlier mail that we have started to implement a
 TOSCA
  YAML to HOT translator on stackforge as project heat-translator. We
 have
  been lucky to get a session allocated in the context of the Open
source
 @
  OpenStack program for the Atlanta summit, so I wanted to share this
with
  the Heat community to hopefully attract some interested people. Here is
 the
  session link:
 
  http://openstacksummitmay2014atlanta.sched.org/event/
  c94698b4ea2287eccff8fb743a358d8c#.U2e-zl6cuVg
 
  While there is some focus on TOSCA, the goal of discussions would also
be
  to find a reasonable design for sitting such a translation layer on-top
 of
  Heat, but also identify the relations and benefits for other projects,
 e.g.
  how Murano use cases that include workflows for templates (which is
part
 of
  TOSCA) could be addressed long term. So we hope to see a lot of
 interested
  folks there!
 
  Regards,
  Thomas
 
  PS: Here is a more detailed description of the session that we
submitted:
 
  1) Project Name:
  heat-translator
 
  2) Describe your project, including links to relevent sites,
 repositories,
  bug trackers and documentation:
  We have recently started a stackforge project [1] with the goal to
enable
  the deployment of templates defined in standard format such as OASIS
 TOSCA
  on top of OpenStack Heat. The Heat community has been implementing a
 native
  template format 'HOT' (Heat Orchestration Templates) during the Havana
 and
  Icehouse cycles, but it is recognized that support for other standard
  formats that are sufficiently aligned with HOT are also desirable to be
  supported.
  Therefore, the goal of the heat-translator project is to enable such
  support by translating such formats into Heat's native format and
thereby
  enable a deployment on Heat. Current focus is on OASIS TOSCA. In fact,
 the
  OASIS TOSCA TC is currently working on a TOSCA Simple Profile in YAML
[2]
  which has been greatly inspired by discussions with the Heat team, to
 help
  getting TOSCA adoption in the community. The TOSCA TC and the Heat team
  have also be in close discussion to keep HOT and TOSCA YAML aligned.
 Thus,
  the first goal of heat-translator will be to enable deployment of TOSCA
  YAML templates thru Heat.
  Development had been started in a separate public github repository

Re: [openstack-dev] [Heat] Special session on heat-translator project at Atlanta summit

2014-05-06 Thread Thomas Spatzier
Hi Dimitri,

I think we will cover the relation to the overall Heat vision during that
session. That would actually be very good use of time during that session,
because we want to shape the project in a way that it adds benefit to the
overall orchestration project. From our perspective, TOSCA is one driving
factor, but it's not a pure TOSCA discussion.
For example, during last year's summit in Portland we came up with this
vision architecture chart in [1], and heat-translator could be a starting
point for this pluggable formats layer. Maybe we will also incubate some
features in heat-translator and later move them more tightly into Heat ...
all points to be sorted out and to develop over time.

My 2 cents.

Thomas

Dimitri Mazmanov dimitri.mazma...@ericsson.com wrote on 06/05/2014
10:16:44:

 From: Dimitri Mazmanov dimitri.mazma...@ericsson.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 06/05/2014 10:18
 Subject: Re: [openstack-dev] [Heat] Special session on heat-
 translator project at Atlanta summit

 Great effort! One question, will this session relate to the overall
 Heat vision [1] (Model Interpreters, API Relay, etc)?

 -
 Dimitri

 [1] https://wiki.openstack.org/wiki/Heat/Vision

 On 05/05/14 19:21, Thomas Spatzier thomas.spatz...@de.ibm.com wrote:

 Hi all,

 I mentioned in some earlier mail that we have started to implement a
TOSCA
 YAML to HOT translator on stackforge as project heat-translator. We
have
 been lucky to get a session allocated in the context of the Open source
@
 OpenStack program for the Atlanta summit, so I wanted to share this with
 the Heat community to hopefully attract some interested people. Here is
the
 session link:

 http://openstacksummitmay2014atlanta.sched.org/event/
 c94698b4ea2287eccff8fb743a358d8c#.U2e-zl6cuVg

 While there is some focus on TOSCA, the goal of discussions would also be
 to find a reasonable design for sitting such a translation layer on-top
of
 Heat, but also identify the relations and benefits for other projects,
e.g.
 how Murano use cases that include workflows for templates (which is part
of
 TOSCA) could be addressed long term. So we hope to see a lot of
interested
 folks there!

 Regards,
 Thomas

 PS: Here is a more detailed description of the session that we submitted:

 1) Project Name:
 heat-translator

 2) Describe your project, including links to relevant sites,
repositories,
 bug trackers and documentation:
 We have recently started a stackforge project [1] with the goal to enable
 the deployment of templates defined in standard format such as OASIS
TOSCA
 on top of OpenStack Heat. The Heat community has been implementing a
native
 template format 'HOT' (Heat Orchestration Templates) during the Havana
and
 Icehouse cycles, but it is recognized that support for other standard
 formats that are sufficiently aligned with HOT are also desirable to be
 supported.
 Therefore, the goal of the heat-translator project is to enable such
 support by translating such formats into Heat's native format and thereby
 enable a deployment on Heat. Current focus is on OASIS TOSCA. In fact,
the
 OASIS TOSCA TC is currently working on a TOSCA Simple Profile in YAML [2]
 which has been greatly inspired by discussions with the Heat team, to
help
 getting TOSCA adoption in the community. The TOSCA TC and the Heat team
 have also been in close discussion to keep HOT and TOSCA YAML aligned.
Thus,
 the first goal of heat-translator will be to enable deployment of TOSCA
 YAML templates thru Heat.
 Development had been started in a separate public github repository [3]
 earlier this year, but we are currently in the process of moving all code
 to the stackforge projects.

 [1] https://github.com/stackforge/heat-translator
 [2]
 https://www.oasis-open.org/committees/document.php?document_id=52571&wg_abbrev=tosca
 [3] https://github.com/spzala/heat-translator

 3) Please describe how your project relates to OpenStack:
 Heat has been working on a native template format HOT to replace the
 original CloudFormation format as the primary template of the core Heat
 engine. CloudFormation shall continue to be supported as one possible
 format (to protect existing content), but it is desired to move such
 support out of the core engine into a translation layer. This is one
 architectural move that can be supported by the heat-translator project.
 Furthermore, there is a desire to enable standardized formats such as OASIS
 TOSCA to run on Heat, which will also be possible thru heat-translator.

 In addition, recent discussions [4] in the large OpenStack orchestration
 community have shown that several groups (e.g. Murano) are looking at
 extending orchestration capabilities beyond Heat functionality, and in
the
 course of doing this also extend current template formats. It has been
 suggested in mailing list posts that TOSCA could be one potential format
to
 center such discussions around instead of several groups

[openstack-dev] [Heat] Special session on heat-translator project at Atlanta summit

2014-05-05 Thread Thomas Spatzier

Hi all,

I mentioned in some earlier mail that we have started to implement a TOSCA
YAML to HOT translator on stackforge as project heat-translator. We have
been lucky to get a session allocated in the context of the Open source @
OpenStack program for the Atlanta summit, so I wanted to share this with
the Heat community to hopefully attract some interested people. Here is the
session link:

http://openstacksummitmay2014atlanta.sched.org/event/c94698b4ea2287eccff8fb743a358d8c#.U2e-zl6cuVg

While there is some focus on TOSCA, the goal of discussions would also be
to find a reasonable design for sitting such a translation layer on-top of
Heat, but also identify the relations and benefits for other projects, e.g.
how Murano use cases that include workflows for templates (which is part of
TOSCA) could be addressed long term. So we hope to see a lot of interested
folks there!

Regards,
Thomas

PS: Here is a more detailed description of the session that we submitted:

1) Project Name:
heat-translator

2) Describe your project, including links to relevant sites, repositories,
bug trackers and documentation:
We have recently started a stackforge project [1] with the goal to enable
the deployment of templates defined in standard format such as OASIS TOSCA
on top of OpenStack Heat. The Heat community has been implementing a native
template format 'HOT' (Heat Orchestration Templates) during the Havana and
Icehouse cycles, but it is recognized that support for other standard
formats that are sufficiently aligned with HOT is also desirable.
Therefore, the goal of the heat-translator project is to enable such
support by translating such formats into Heat's native format and thereby
enable a deployment on Heat. Current focus is on OASIS TOSCA. In fact, the
OASIS TOSCA TC is currently working on a TOSCA Simple Profile in YAML [2]
which has been greatly inspired by discussions with the Heat team, to help
getting TOSCA adoption in the community. The TOSCA TC and the Heat team
have also been in close discussion to keep HOT and TOSCA YAML aligned. Thus,
the first goal of heat-translator will be to enable deployment of TOSCA
YAML templates thru Heat.
Development had been started in a separate public github repository [3]
earlier this year, but we are currently in the process of moving all code
to the stackforge projects.

[1] https://github.com/stackforge/heat-translator
[2]
https://www.oasis-open.org/committees/document.php?document_id=52571&wg_abbrev=tosca
[3] https://github.com/spzala/heat-translator

3) Please describe how your project relates to OpenStack:
Heat has been working on a native template format HOT to replace the
original CloudFormation format as the primary template of the core Heat
engine. CloudFormation shall continue to be supported as one possible
format (to protect existing content), but it is desired to move such
support out of the core engine into a translation layer. This is one
architectural move that can be supported by the heat-translator project.
Furthermore, there is a desire to enable standardized formats such as OASIS
TOSCA to run on Heat, which will also be possible thru heat-translator.

In addition, recent discussions [4] in the large OpenStack orchestration
community have shown that several groups (e.g. Murano) are looking at
extending orchestration capabilities beyond Heat functionality, and in the
course of doing this also extend current template formats. It has been
suggested in mailing list posts that TOSCA could be one potential format to
center such discussions around instead of several groups developing their
own orchestration DSLs. The next version of TOSCA with its simple profile
in YAML is very open for input from the community, so there is a great
opportunity to shape the standard in a way to address use cases brought up
by the community. Willingness to join discussions with the TOSCA TC have
already been indicated by several companies contributing to OpenStack.
Therefore we think the heat-translator project can help to focus such
discussions.

[4]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/028957.html

4) How do you plan to use the time and space?
Give attendees an overview of current developments of the TOSCA Simple
Profile in YAML and how we are aligning this with HOT.
Give a quick summary of current code.
Discuss next steps and long term direction of the heat-translator project:
alignment with Heat, parts that could move into Heat, parts that would stay
outside of Heat etc.
Collect use cases from other interested groups (e.g. Murano), and discuss
that as potential input for the project and also ongoing TOSCA standards
work.
Discuss if and how this project could help to address requirements of
different groups.
Discuss and agree on a design to (1) meet important requirements based on
those discussions, and (2) to best enable collaborative development with
the community.


___
OpenStack-dev mailing list

Re: [openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

2014-04-28 Thread Thomas Spatzier
Excerpts from Steve Baker's message on 28/04/2014 01:25:29:

snip
 #1 Enable software components for full lifecycle:
snip
 So in a short, stripped-down version, SoftwareConfigs could look like

 my_sw_config:
   type: OS::Heat::SoftwareConfig
   properties:
     create_config: # the hook for software install
     suspend_config: # hook for suspend action
     resume_config: # hook for resume action
     delete_config: # hook for delete action

snip

 OS::Heat::SoftwareConfig itself needs to remain ignorant of heat
 lifecycle phases, since it is just a store of config.

Sure, I agree on that. SoftwareConfig is just a store of config that gets
used by another resource which then deals with Heat's lifecycle.
What I was proposing does not actually make it lifecycle-aware; it just
lets the user store the respective config pieces, which a software
deployment then executes at the respective lifecycle steps.


 Currently there are 2 ways to build configs which are lifecycle aware:
 1. have a config/deployment pair, each with different deployment actions
 2. have a single config/deployment, and have the config script do
 conditional logic
on the derived input value deploy_action

 Option 2. seem reasonable for most cases, but having an option which
 maps better to TOSCA would be nice.

So option 2 sounds like the right thing to me. The only thing is that I
would not want to put all logic into a large script with conditional
handling, but to allow breaking the script into parts and let the condition
handling be done by the framework. My snippet above would then just allow
for telling the deploy logic which script to call when.
Most of the real work would probably be done in the in-instance tool, so
the Heat resource would really just allow for storing data in a
well-defined structure.
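
To make option 2 concrete, it could be sketched roughly like this in HOT (an
illustrative sketch only; the exact property names, and whether the actions
list looks like this, follow my reading of the current software-config design
and may differ from what actually lands):

```yaml
my_sw_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: script
    config: |
      #!/bin/bash
      # branch on the derived input value "deploy_action" that the
      # deployment passes down to the instance
      case "$deploy_action" in
        CREATE)  install_my_software ;;       # placeholder commands
        SUSPEND) service my_service stop ;;
        RESUME)  service my_service start ;;
        DELETE)  uninstall_my_software ;;
      esac

my_sw_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: { get_resource: my_sw_config }
    server: { get_resource: my_server }
    actions: [CREATE, SUSPEND, RESUME, DELETE]
```

The alternative argued for above would replace the case statement with
separate create/suspend/resume/delete scripts and let the framework pick the
right one.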


 Clint's StructuredConfig example would get us most of the way there,
 but a dedicated config resource might be easier to use.

Right, and that's the core of my proposal: having a dedicated config
resource that is intuitive to use for template authors.

 The deployment resource could remain agnostic to the contents of this
 resource though. The right place to handle this on the deployment
 side would be in the orc script 55-heat-config, which could infer
 whether the config was a lifecycle config, then invoke the required
 config based on the value of deploy_action.

Fully agree on that. This should be the place to handle most of the work.
I think we are saying the same thing on this topic, so I am optimistic that
we will agree on a solution :-)



 #2 Enable ad-hoc actions on software components:
snip

 Lets park this for now. Maybe one day heat templates will be used to
 represent workflow tasks, but this isn't directly related to software
config.

I think if we get to a good conclusion of #1, maybe this won't be a big
deal after all.
So yeah, maybe park it (but keep it in the back of our heads) and look at it
again depending on what the result for #1 looks like.


snip
 #3.1 software deployment should run just once:
 A bug has been raised because with today's implementation it can happen
 that SoftwareDeployments get executed multiple times. There has been some
 discussion around this issue but no final conclusion. An average user
will
 however assume that his automation gets run only once. When
 using existing scripts, it would be an additional burden to require
 rewrites to cope with multiple invocations. Therefore, we should have a
 generic solution to the problem so that users do not have to deal with
this
 complex problem.

 I'm with Clint on this one. Heat-engine cannot know the true state
 of a server just by monitoring what has been polled and signaled.
 Since it can't know it would be dangerous for it to guess. Instead
 it should just offer all known configuration data to the server and
 allow the server to make the decision whether to execute a config
 again. I still think one more derived input value would be useful to
 help the server to make that decision. This could either be a
 datestamp for when the derived config was created, or a hash of all
 of the derived config data.

So as I said in another note, I agree that this seems best handled in the
in-instance tool, and that the Heat engine or the resource should probably
not gain any new magic. If there is some additional state property that the
resource maintains, and the in-instance tool handles it, that should be
fine. What is important is that users who want to use existing automation
scripts do not have to implement much logic for interpreting that
additional flag; instead, we should handle it in the generic hook
invocation logic.

Can you elaborate more on what you have in mind with the additional derived
input value?
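
As one possible shape for that, here is a sketch of how an in-instance hook
could use such a derived value to decide whether to re-run (the input name
deploy_config_hash is purely hypothetical; it stands for the hash of the
derived config data suggested above):

```yaml
run_once_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: script
    config: |
      #!/bin/bash
      # "deploy_config_hash" is a hypothetical derived input: a hash of
      # the derived config data. Record it in a marker file so this exact
      # config is applied at most once, even if polled repeatedly.
      marker=/var/lib/heat-config/done-${deploy_config_hash}
      if [ -e "$marker" ]; then
        exit 0   # already applied; skip re-execution
      fi
      do_the_actual_work   # placeholder for the user's automation
      touch "$marker"
```

The user's existing script stays untouched; the marker logic could live in
the generic hook invocation wrapper rather than in each script.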



 #3.2 dependency on heat-cfn-api:
 Some parts of current signaling still depend on the heat-cfn-api. While
 work seems underway to completely move to Heat native signaling, some
 cleanup is needed to make sure it is used throughout the code.


Re: [openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

2014-04-24 Thread Thomas Spatzier
Renat Akhmerov rakhme...@mirantis.com wrote on 24/04/2014 08:28:25:

 From: Renat Akhmerov rakhme...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 24/04/2014 08:29
 Subject: Re: [openstack-dev] [Heat] Design summit preparation - Next
 steps for Heat Software Orchestration


 On 24 Apr 2014, at 01:43, Ruslan Kamaldinov rkamaldi...@mirantis.com
wrote:

  On Tue, Apr 22, 2014 at 8:42 PM, Thomas Spatzier
  thomas.spatz...@de.ibm.com wrote:
   #2 Enable ad-hoc actions on software components:

 Cool, coincidentally or not, but we use the term “ad-hoc action” too in
 Mistral :) This is the functionality we implemented a couple of
 weeks ago. Might be useful to sync up on that.

Ok, that sounds good. I was hoping we can move this discussion in a
direction where we can converge several of the recently ongoing
discussions :-) So if this makes sense for everybody and there is time,
let's see what can be covered in that session.

My personal view is that we should discuss what fits as a next step into
Heat's software orchestration (i.e. natural fit without scope bust), and do
it in a way that allows surrounding projects to interface nicely with it.


  From the implementation point of view and our previous discussions
  I figured out
  that Mistral can be a good fit for lifecycle hooks execution. Renat
(Mistral
  lead) added a topic [1] for Heat weekly meeting, I hope we can discuss
this
  today as part of that topic.

 Yes, Ruslan, I’ve been there and we've decided to move the
 discussion to ML. We’re now finalizing our current (which we still
 conventionally call PoC) development phase (preparing necessary
 materials) so once this is done it’ll be much easier to talk about
 opportunities for integration because everyone will be able to find
 out what exactly Mistral is.

 Thanks

 Renat Akhmerov
 @ Mirantis Inc.




Re: [openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

2014-04-24 Thread Thomas Spatzier
Hi Clint,

thanks for the comments. Those are some really good points. I added some
more thoughts from my side below.

Excerpts from Clint Byrum's message on 24/04/2014 17:53:40:

snip
  So in a short, stripped-down version, SoftwareConfigs could look like
 
  my_sw_config:
    type: OS::Heat::SoftwareConfig
    properties:
      create_config: # the hook for software install
      suspend_config: # hook for suspend action
      resume_config: # hook for resume action
      delete_config: # hook for delete action
 

 First off, modeling more actions is definitely on the near-term need
 list for TripleO. We need to model rebuild and replace as an action too,
 so that we can evacuate/migrate workloads off of compute nodes.

 I think you can prototype action handling already with StructuredConfig,
 which allows you to define your own structure underneath config:.
 So you'd just simply do this:

 my_sw_config:
   type: OS::Heat::StructuredConfig
   properties:
     config:
       create_config: |
         #!/bin/bash
         execute_stuff
       delete_config: |
         #!/bin/bash
         execute_delete_stuff

Interesting. I had not thought of StructuredConfig in that way yet. So the
in-instance tool would be implemented such that when the corresponding
StructuredDeployment resource is in CREATE_IN_PROGRESS it executes the
create_config, and when it is in DELETE_IN_PROGRESS it executes the
delete_config (does the tag-teaming between StructuredDeployment and the
in-instance tool do this today?).
I would still slightly prefer a more explicit, top-level definition of
config hooks with well-defined semantics (i.e. at which lifecycle step they
get executed), though, as opposed to a free-form map.


 Then your in-instance tools would simply write these as executables and
 run them when the action dictates. Suspend and resume aren't currently
 exposed, but I think that would be a fairly trivial patch to enable.

 Note that IMO we should _not_ encourage users to embed executables in
 templates. In addition to muddying the waters between orchestration
 and configuration management, it is also a very poor model for long
 term source code control. You lose things like renames and per-file
 commit history. I think users will be much better served by other tools
 bundling software together with a template, and I've always figured that
 something like Murano or Solum would do that. So that you'd instead have
 a template like this:


 my_sw_config:
   type: OS::Heat::StructuredConfig
   properties:
     config:
       create_config: {get_file: create_hook.sh}
       delete_config: {get_file: delete_hook.sh}

+1 on that! Actually, I posted a wordpress sample yesterday which exactly
does this:

https://review.openstack.org/#/c/89885/


 And if you're image-based, then you'd instead just bundle the image
 creation definition and not even need to include instructions on code
 injection in your template.

  When such a SoftwareConfig gets associated to a server via
  SoftwareDeployment, the SoftwareDeployment resource lifecycle
  implementation could trigger the respective hooks defined in
SoftwareConfig
  (if a hook is not defined, a no-op is performed). This way, all config
  related to one piece of software is nicely defined in one place.
 

 I don't believe "nicely defined in one place" is the right way to say
 this. I would call it an unmanageable chunk of YAML.

IMHO, an unmanageable chunk of yaml would be if the scripting is inlined.
If the scripts get referenced like mentioned above (I think we both agree
on that), it looks pretty neat to me.


 One point, there is no no-op. Heat's software config has an interface,
 and in-instance tools do the work.  I want to make sure we always stay
 true to that. I don't want Heat to assume anything about what happens
 inside instances. Heat exposes configs, and optionally waits for a
 signal. Nothing more.

Ok, maybe no-op was not the right term, or maybe over-simplifying it. So
what I meant was:
If a SoftwareConfig defined a 'create_config', the SoftwareDeployment would
set its state accordingly to CREATE_IN_PROGRESS and then wait for a signal
from the in-instance tool. The in-instance tool would do the actual work,
signal the SoftwareDeployment which goes to CREATE_COMPLETE and so on. Same
for other operations (delete, suspend, ...).
If a SoftwareConfig did not define a certain config - let's take
'suspend_config' now - the SoftwareDeployment (which can check the content
of the associated SoftwareConfig) can just complete the task directly, i.e.
for SUSPEND directly go to SUSPEND_COMPLETE and avoid the whole round trip
of waiting for a signal from the in-instance tool (which found out it does
not have to do anything).


 
  #2 Enable ad-hoc actions on software components:
  Apart from basic resource lifecycle hooks, it would be desirable to
allow
  for invocation of ad-hoc actions on software. Examples would be the
ad-hoc
  creation of DB backups, 

Re: [openstack-dev] [Nova][Neutron][Cinder][Heat]Should we support tags for os resources?

2014-04-23 Thread Thomas Spatzier
Excerpts from Jay Pipes' jaypi...@gmail.com message on 23/04/2014
01:43:42:

 On Tue, 2014-04-22 at 13:14 +0200, Thomas Spatzier wrote:
  snip
* Identify key/value pairs that are relied on by all of Nova to be a
   specific key and value combination, and make these things actual real
   attributes on some object model -- since that is a much greater guard
   for the schema of an object and enables greater performance by
allowing
   both type safety of the underlying data and removes the need to
search
   by both a key and a value.
 
  Makes a lot of sense to me. So are you suggesting to have a set of
  well-defined property names per resource but still store them in the
  properties name-value map? Or would you rather make those part of the
  resource schema?

 I'd rather have the common ones in the resource schema itself, since
 that is, IMHO, better practice for enforcing consistency and type
 safety.

+1, that's what I would prefer as well, but I wanted to make sure you mean
the same.


  BTW, here is a use case in the context of which we have been thinking
about
  that topic: we opened a BP for allowing constraint based selection of
  images for Heat templates, i.e. instead of saying something like (using
  pseudo template language)
 
  image ID must be in [fedora-19-x86_64, fedora-20-x86_64]
 
  say something like
 
  architecture must be x86_64, distro must be fedora, version must be
  between 19 and 20
 
  (see also [1]).
 
  This of course would require the existence of well-defined properties
in
  glance so an image selection query in Heat can work.

 Zactly :)

  As long as properties are just custom properties, we would require a
lot
  of discipline from everyone to maintain properties correctly.

 Yep, and you'd need to keep in sync with the code in Nova that currently
 maintains these properties. :)

 Best,
 -jay

   And the
  implementation in Heat could be kind of tolerant, i.e. give it a try,
and
  if the query fails just fail the stack creation. But if this is likely
to
  happen in 90% of all environments, the usefulness is questionable.
 
  Here is a link to the BP I mentioned:
  [1]
  https://blueprints.launchpad.net/heat/+spec/constraint-based-flavors-and-images
 
  Regards,
  Thomas
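
For illustration, the blueprint idea quoted above might look roughly like
this in pseudo-HOT (hypothetical syntax; no such constraint exists in Heat,
and it would only work if the Glance properties were reliably maintained):

```yaml
parameters:
  image:
    type: string
    constraints:
      # hypothetical: accept any image whose well-defined Glance
      # properties satisfy these conditions
      - image_properties:
          architecture: x86_64
          os_distro: fedora
          os_version: { range: { min: '19', max: '20' } }
```
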




Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-23 Thread Thomas Spatzier
Hi all,

I just realized during a review that I see a couple more options, so
this poll seems to be thru already ;-)

Thank you all for your support! I'm glad to be part of this great team!

Cheers,
Thomas

Liang Chen cbjc...@linux.vnet.ibm.com wrote on 23/04/2014 07:53:07:

 From: Liang Chen cbjc...@linux.vnet.ibm.com
 To: openstack-dev@lists.openstack.org
 Date: 23/04/2014 07:56
 Subject: Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for
heat-core

 +1 !

 On 04/23/2014 02:43 AM, Zane Bitter wrote:
  Resending with [Heat] in the subject line. My bad.
 
  On 22/04/14 14:21, Zane Bitter wrote:
  I'd like to propose that we add Thomas Spatzier to the heat-core team.
 
  Thomas has been involved in and consistently contributing to the Heat
  community for around a year, since the time of the Havana design
summit.
  His code reviews are of extremely high quality IMO, and he has been
  reviewing at a rate consistent with a member of the core team[1].
 
  One thing worth addressing is that Thomas has only recently started
  expanding the focus of his reviews from HOT-related changes out into
the
  rest of the code base. I don't see this as an obstacle - nobody is
  familiar with *all* of the code, and we trust core reviewers to know
  when we are qualified to give +2 and when we should limit ourselves to
  +1 - and as far as I know nobody else is bothered either. However, if
  you have strong feelings on this subject nobody will take it
personally
  if you speak up :)
 
  Heat Core team members, please vote on this thread. A quick reminder
of
  your options[2]:
  +1  - five of these are sufficient for acceptance
0  - abstention is always an option
  -1  - this acts as a veto
 
  cheers,
  Zane.
 
 
  [1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
  [2]
 
https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members
 


Re: [Openstack] Heat Orchestration

2014-04-23 Thread Thomas Spatzier
Hi Prabhu,


there are a few HOT samples in the heat-templates repo [1] that show how
neutron networks get created or used within a HOT template. It might be
worth checking them out.


https://github.com/openstack/heat-templates/blob/master/hot/servers_in_existing_neutron_net.yaml

https://github.com/openstack/heat-templates/blob/master/hot/servers_in_existing_neutron_network_no_floating_ips.yaml

https://github.com/openstack/heat-templates/blob/master/hot/servers_in_new_neutron_net.yaml


General information on HOT (short guide [2] and spec [3]) is available as
part of the developer docs, which also provide a list of all the resources
(see "OpenStack Resource Types" in [4]) you can use in Heat templates.


Hope this helps.


[1] https://github.com/openstack/heat-templates
[2] http://docs.openstack.org/developer/heat/template_guide/hot_guide.html
[3] http://docs.openstack.org/developer/heat/template_guide/hot_spec.html
[4] http://docs.openstack.org/developer/heat/template_guide/
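
To answer the question directly: yes, Heat can create networks. A
stripped-down sketch of a HOT template doing so (image and flavor names are
placeholders for whatever exists in your cloud):

```yaml
heat_template_version: 2013-05-23

description: Minimal sketch - create a network, a subnet and a server on it

resources:
  net:
    type: OS::Neutron::Net

  subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: net }
      cidr: 10.0.0.0/24

  server:
    type: OS::Nova::Server
    properties:
      image: fedora-20-x86_64   # placeholder image name
      flavor: m1.small          # placeholder flavor name
      networks:
        - network: { get_resource: net }
```

Of course you can equally reference pre-existing networks (as the first two
samples above do) instead of creating them in the stack.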


Regards,
Thomas



prabhuling kalyani prabhu...@gmail.com wrote on 23/04/2014 08:14:41:

 From: prabhuling kalyani prabhu...@gmail.com
 To: openstack@lists.openstack.org
 Date: 23/04/2014 08:26
 Subject: [Openstack] Heat Orchestration

 Hi,
 I am trying to understand the heat Orchestration phenomenon.
 I want to know if heat can create networks or if they need to be
 created upfront.
 It would help a lot if there is any documentation for HOT.
 Thanks a lot
 - Prabhu
 ___
 Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




Re: [openstack-dev] [Nova][Neutron][Cinder][Heat]Should we support tags for os resources?

2014-04-22 Thread Thomas Spatzier
Jay Pipes jaypi...@gmail.com wrote on 20/04/2014 15:05:51:

 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org
 Date: 20/04/2014 15:07
 Subject: Re: [openstack-dev] [Nova][Neutron][Cinder][Heat]Should we
 support tags for os resources?

 On Sun, 2014-04-20 at 08:35 +, Huangtianhua wrote:
  Hi all:
 
  Currently, the EC2 API of OpenStack only has tags support (metadata)
  for instances. And there is already a blueprint about adding support
  for volumes and volume snapshots using “metadata”.
 
  There are a lot of resources such as
  image/subnet/securityGroup/networkInterface(port) are supported add
  tags for AWS.
 
  I think we should support tags for these resources. There may be no
  property “metadata” for these resources, so we should add
  “metadata” to support resource tags and change the related APIs.

 Hi Tianhua,

 In OpenStack, generally, the choice was made to use maps of key/value
 pairs instead of lists of strings (tags) to annotate objects exposed in
 the REST APIs. OpenStack REST APIs inconsistently call these maps of
 key/value pairs:

  * properties (Glance, Cinder Image, Volume respectively)
  * extra_specs (Nova InstanceType)
  * metadata (Nova Instance, Aggregate and InstanceGroup, Neutron)
  * metadetails (Nova Aggregate and InstanceGroup)
  * system_metadata (Nova Instance -- differs from normal metadata in
 that the key/value pairs are 'owned' by Nova, not a user...)

 Personally, I think tags are a cleaner way of annotating objects when
 the annotation is coming from a normal user. Tags represent by far the
 most common way for REST APIs to enable user-facing annotation of
 objects in a way that is easy to search on. I'd love to see support for
 tags added to any searchable/queryable object in all of the OpenStack
 APIs.

Fully agree. Tags should be something a normal end user can use to make the
objects he is working with searchable for his purposes.
And this is likely something different from system-controlled properties
that _all_ users (not the one specific end user) can rely on.


 I'd also like to see cleanup of the aforementioned inconsistencies in
 how maps of key/value pairs are both implemented and named throughout
 the OpenStack APIs. Specifically, I'd like to see this implemented in
 the next major version of the Compute API:

+1 on making this uniform across the various projects. This would make it
much more intuitive.


  * Removal of the metadetails term
  * All key/value pairs can only be changed by users with elevated
 privileges, i.e. they are system-controlled (normal users should use tags)

+1 on this, because this would be data that other users or projects rely on
- see also my use case below.

  * Call all these key/value pair combinations properties --
 technically, metadata is data about data, like the size of an
 integer. These key/value pairs are just data, not data about data.

+1 on properties

  * Identify key/value pairs that are relied on by all of Nova to be a
 specific key and value combination, and make these things actual real
 attributes on some object model -- since that is a much greater guard
 for the schema of an object and enables greater performance by allowing
 both type safety of the underlying data and removes the need to search
 by both a key and a value.

Makes a lot of sense to me. So are you suggesting to have a set of
well-defined property names per resource but still store them in the
properties name-value map? Or would you rather make those part of the
resource schema?

BTW, here is a use case in the context of which we have been thinking about
that topic: we opened a BP for allowing constraint based selection of
images for Heat templates, i.e. instead of saying something like (using
pseudo template language)

image ID must be in [fedora-19-x86_64, fedora-20-x86_64]

say something like

architecture must be x86_64, distro must be fedora, version must be
between 19 and 20

(see also [1]).

This of course would require the existence of well-defined properties in
glance so an image selection query in Heat can work.
As long as properties are just custom properties, we would require a lot
of discipline from everyone to maintain properties correctly. And the
implementation in Heat could be kind of tolerant, i.e. give it a try, and
if the query fails just fail the stack creation. But if this is likely to
happen in 90% of all environments, the usefulness is questionable.
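To illustrate the idea (purely hypothetical syntax, since the blueprint is
not implemented and HOT today only supports constraints like allowed_values
or allowed_pattern), such a constraint-based image parameter might be
sketched as:

```yaml
parameters:
  server_image:
    type: string
    description: image selected by constraints rather than by a fixed name
    constraints:
      # hypothetical 'image_properties' constraint, matched against
      # metadata maintained in glance for each image
      - image_properties:
          architecture: x86_64
          os_distro: fedora
          os_version: { min: "19", max: "20" }
```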

Here is a link to the BP I mentioned:
[1]
https://blueprints.launchpad.net/heat/+spec/constraint-based-flavors-and-images

Regards,
Thomas


 Best,
 -jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

2014-04-22 Thread Thomas Spatzier

Hi all,

following up on Zane's request from end of last week, I wanted to kick off
some discussion on the ML around a design summit session proposal titled
"Next steps for Heat Software Orchestration". I guess there will be things
that can be sorted out this way and others that can be refined so we can
have a productive session in Atlanta. I am basically copying the complete
contents of the session proposal below so we can iterate on various points.
If it turns out that we need to split off threads, we can do that at a
later point.

The session proposal itself is here:
http://summit.openstack.org/cfp/details/306

And here are the details:

With the Icehouse release, Heat includes implementation for software
orchestration (Kudos to Steve Baker and Jun Jie Nan) which enables clean
separation of any kind of software configuration from compute instances and
thus enables a great new set of features. The implementation for software
orchestration in Icehouse has probably been the major chunk of work to
achieve a first end-to-end flow for software configuration thru scripts,
Chef or Puppet, but there is more work to be done to enable Heat for more
software orchestration use cases beyond the current support.
Below are a couple of use cases, and more importantly, thoughts on design
options of how those use cases can be addressed.

#1 Enable software components for full lifecycle:
With the current design, software components defined thru SoftwareConfig
resources allow for only one config (e.g. one script) to be specified.
Typically, however, a software component has a lifecycle that is hard to
express in a single script. For example, software must be installed
(created), there should be support for suspend/resume handling, and it
should be possible to allow for deletion-logic. This is also in line with
the general Heat resource lifecycle.
By means of the optional 'actions' property of SoftwareDeployment it is
possible today to specify at which lifecycle actions of the deployment
resource the single config hook shall be executed at runtime. However, for
modeling complete handling of a software component, this would require a
number of separate SoftwareConfig and SoftwareDeployment resources to be
defined which makes a template more verbose than it would have to be.
As an optimization, SoftwareConfig could allow for providing several hooks
to address all default lifecycle operations that would then be triggered
thru the respective lifecycle actions of a SoftwareDeployment resource.
Resulting SoftwareConfig definitions could then look like the one outlined
below. I think this would fit nicely into the overall Heat resource model
for actions beyond stack-create (suspend, resume, delete). Furthermore,
this will also enable a closer alignment and straight-forward mapping to
the TOSCA Simple Profile YAML work done at OASIS and the heat-translator
StackForge project.

So in a short, stripped-down version, SoftwareConfigs could look like

my_sw_config:
  type: OS::Heat::SoftwareConfig
  properties:
create_config: # the hook for software install
suspend_config: # hook for suspend action
resume_config: # hook for resume action
delete_config: # hook for delete action

When such a SoftwareConfig gets associated to a server via
SoftwareDeployment, the SoftwareDeployment resource lifecycle
implementation could trigger the respective hooks defined in SoftwareConfig
(if a hook is not defined, a no-op is performed). This way, all config
related to one piece of software is nicely defined in one place.
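As a sketch of how this could tie together (note that the per-action
*_config properties are the proposal being discussed here, not current Heat
syntax - today a SoftwareConfig has a single 'config' property - and the
script file names are made up):

```yaml
resources:
  my_sw_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      create_config: { get_file: install_app.sh }    # runs on CREATE
      suspend_config: { get_file: quiesce_app.sh }   # runs on SUSPEND
      resume_config: { get_file: resume_app.sh }     # runs on RESUME
      delete_config: { get_file: uninstall_app.sh }  # runs on DELETE

  my_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: my_sw_config }
      server: { get_resource: my_server }
      # each lifecycle action of the deployment would trigger the
      # matching hook above, or a no-op where a hook is not defined
```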


#2 Enable ad-hoc actions on software components:
Apart from basic resource lifecycle hooks, it would be desirable to allow
for invocation of ad-hoc actions on software. Examples would be the ad-hoc
creation of DB backups, application of patches, or creation of users for an
application. Such hooks (implemented as scripts, Chef recipes or Puppet
manifests) could be defined in the same way as basic lifecycle hooks. They
could be triggered by doing property updates on the respective
SoftwareDeployment resources (just a thought and to be discussed during
design sessions).
I think this item could help bridge over to some discussions raised by
the Murano team recently (my interpretation: being able to trigger actions
from workflows). It would add a small feature on top of the current
software orchestration in Heat and keep definitions in one place. And it
would allow triggering by something or somebody else (e.g. a workflow)
probably using existing APIs.
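A stripped-down sketch of what that could look like (again hypothetical
syntax, just to make the idea concrete - both the 'backup_config' hook and
the 'run_action' property are made up for illustration):

```yaml
my_db_config:
  type: OS::Heat::SoftwareConfig
  properties:
    create_config: { get_file: install_db.sh }
    backup_config: { get_file: backup_db.sh }  # hypothetical ad-hoc hook

my_db_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: { get_resource: my_db_config }
    server: { get_resource: db_server }
    # updating this hypothetical property (via stack-update, or by a
    # workflow calling the Heat API) would trigger the backup_config hook
    run_action: backup
```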


#3 address known limitations of Heat software orchestration
As of today, there already are a couple of known limitations or points where
we have seen the need for additional discussion and design work. Below is a
collection of such issues.
Maybe some are already being worked on; others need more discussion.

#3.1 software deployment should run just once:
A bug has been raised because with today's implementation it can happen
that SoftwareDeployments get executed multiple times.

Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest] Questions about images

2014-04-17 Thread Thomas Spatzier
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 17/04/2014 00:55
 Subject: Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest]
 Questions about images

 On 17/04/14 09:11, Thomas Spatzier wrote:
  From: Mike Spreitzer mspre...@us.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 16/04/2014 19:58
  Subject: Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest]
  Questions about images
 
  Steven Hardy sha...@redhat.com wrote answers to most of my
questions.
 
  To clarify, my concern about URLs and image names is not so much for
  the sake of a person browsing/writing but rather because I want
  programs, scripts, templates, and config files (e.g., localrc for
  DevStack) to all play nice together (e.g., not require a user to
  rename any images or hack any templates).  I think Steve was
  thinking along the same lines when he reiterated the URL he uses in
  localrc and wrote:
 
  We should use the default name that devstack uses in glance, IMO, e.g
 
  fedora-20.x86_64
  FWIW, instead of specifying allowed image names in a template it might
be a
  better approach to allow for specifying constraints against the image
(e.g.
  distro is fedora, or distro is ubuntu, version between 12.04 and 13.04
etc)
  and then use metadata in glance to select the right image. Of course
this
  would require some discipline to maintain metadata and we would have to
  agree on mandatory attributes and values in it (I am sure there is at
least
  one standard with proposed options), but it would make templates more
  portable ... or at least the author could specify more clearly under
which
  environments he/she thinks the template will work.
 
  There is a blueprint which goes in this direction:
  https://blueprints.launchpad.net/heat/+spec/constraint-based-
 flavors-and-images
 
 This would be good, but being able to store and query this metadata from
 glance would be a prerequisite for doing this in heat.

 Can you point to the glance blueprints which would enable this heat
 blueprint?

Sure, we will add references to corresponding glance BPs, since as you say
they are a pre-req.
We'll update the BP in the next couple of days.

Thomas




Re: [openstack-dev] [heat] computed package names?

2014-04-16 Thread Thomas Spatzier
 From: Steven Dake sd...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 16/04/2014 01:43
 Subject: Re: [openstack-dev] [heat] computed package names?

 On 04/15/2014 03:45 PM, Zane Bitter wrote:
  On 15/04/14 17:59, Mike Spreitzer wrote:
  Clint Byrum cl...@fewbar.com wrote on 04/15/2014 05:17:21 PM:
Excerpts from Zane Bitter's message of 2014-04-15 13:32:30 -0700:
   
 FWIW, in the short term I'm not aware of any issue with
installing
 mariadb in Fedora 17/18, provided that mysql is not installed
  first. And
 in fact they're both EOL anyway, so we should probably migrate
  all the
 templates to Fedora 20 and mariadb.
   

 The last time I tried F17 images, the database installation step failed
 miserably because of problems in the base distribution.

+1 for that.
 
  I count 22 templates in heat-templates that are written to support
  Fedora, Ubuntu, and RHEL; is MariaDB available in those?  I do not see
  it in Ubuntu 12.10, for example.
 
  I imagine it's a problem for RHEL (can RHEL 7 just get released
  already?). Ubuntu is not an issue though, unless they have adopted yum
  while I was not looking.
 
  Checking a random sample, they only include yum and systemd
  sections (no apt or sysvinit) in the metadata, so the purported
  support for Ubuntu 10.04 is just due to copy-paste and isn't actually
  implemented.
 
  There is, I thought, one template that does actually support Ubuntu.
  Stuff in the F17 directory is there for a reason.
 
 The only rational reason for having the F17 directory is because nobody
 has done the work of porting these templates to F20.  That work needs to
 be done before we can remove the F17 directory permanently :)

+1 on porting existing templates to current distros. And it would be even
better to start using some of the new features like software config.
I have actually started on a WordPress example that uses software config
and should be able to post it for review soon.

Regards,
Thomas


 Regards,
 -steve

  - ZB
 


Re: [openstack-dev] [heat] computed package names?

2014-04-16 Thread Thomas Spatzier
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 16/04/2014 00:46
 Subject: Re: [openstack-dev] [heat] computed package names?

 On 15/04/14 17:59, Mike Spreitzer wrote:
  Clint Byrum cl...@fewbar.com wrote on 04/15/2014 05:17:21 PM:
Excerpts from Zane Bitter's message of 2014-04-15 13:32:30 -0700:
   
 FWIW, in the short term I'm not aware of any issue with installing
 mariadb in Fedora 17/18, provided that mysql is not installed
  first. And
 in fact they're both EOL anyway, so we should probably migrate all
the
 templates to Fedora 20 and mariadb.
   
+1 for that.
 
  I count 22 templates in heat-templates that are written to support
  Fedora, Ubuntu, and RHEL; is MariaDB available in those?  I do not see
  it in Ubuntu 12.10, for example.

 I imagine it's a problem for RHEL (can RHEL 7 just get released
 already?). Ubuntu is not an issue though, unless they have adopted yum
 while I was not looking.

 Checking a random sample, they only include yum and systemd
 sections (no apt or sysvinit) in the metadata, so the purported
 support for Ubuntu 10.04 is just due to copy-paste and isn't actually
 implemented.

IMO, it would be desirable not to have things like yum or apt appear in the
template explicitly. For many packages, at least the top level
package names (not including distro specific versioning strings) are equal
across distros, so when a package is specified in a template it should be
possible for a software deployment hook (which can be distro specific) to
figure out how to install it.
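Until such a hook exists, the abstraction can only live inside the config
itself; below is a minimal sketch using today's script-based SoftwareConfig,
with 'mariadb-server' as an example package name:

```yaml
install_db:
  type: OS::Heat::SoftwareConfig
  properties:
    group: script
    config: |
      #!/bin/sh
      # let the script pick whichever package manager the distro provides
      if command -v yum >/dev/null 2>&1; then
        yum install -y mariadb-server
      elif command -v apt-get >/dev/null 2>&1; then
        apt-get install -y mariadb-server
      fi
```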

Regards,
Thomas


 There is, I thought, one template that does actually support Ubuntu.
 Stuff in the F17 directory is there for a reason.

 - ZB



Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-16 Thread Thomas Spatzier
Hi Ruslan,

 From: Ruslan Kamaldinov rkamaldi...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 16/04/2014 00:38
 Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the
cloud

 Update:
 Stan filed a blueprint [0] for type interfaces in HOT.


 I would like to outline the current vision of Murano Application format,
to
 make sure we're all on the same page. We had a valuable discussion in
several
 MLs and we also had a lot of discussions between Murano team members. As
a
 result of these discussion we see the following plan for Murano:

 * Adopt TOSCA for application definition in Murano. Work with TOSCA
committee
 * Utilize Heat as much as possible. Participate in discussions and
   implementations for hooks mechanism in Heat. Participate in HOT format
   discussions

Those two sound great. For the version of TOSCA we are currently defining,
the more concrete implementation input we get, the better. So your
collaboration is more than welcome. And putting things into Heat that make
sense to be put into Heat is also a good plan, since this gives us a common
code base instead of duplicated efforts.

It would be good to get together at the next summit and discuss some of
this. We also started implementation of a TOSCA YAML library and a
translator to HOT as a stackforge project, and we are thinking about how a
TOSCA layer actually fits into the overall picture. Would be good to talk
about this with key stakeholders. I actually submitted a session proposal
for one of the open source @ OpenStack slots to get a room and time slot,
but have not heard back yet.

 * Adopt Mistral DSL for workflow management. Find a way to compose Heat
   templates and Mistral DSL

Also makes sense, and this is another item where we have some interest from
a TOSCA perspective.

Regards,
Thomas

 * Keep MuranoPL engine in the source tree to take care of all the
features we
   need for our users until those features are implemented on top of things
   described above

 Murano, Heat teams, please let me know if this plan sounds sane to you.

 [0] https://blueprints.launchpad.net/heat/+spec/interface-types

 Thanks,
 Ruslan

 On Sat, Apr 5, 2014 at 12:25 PM, Thomas Spatzier
 thomas.spatz...@de.ibm.com wrote:
  Clint Byrum cl...@fewbar.com wrote on 04/04/2014 19:05:04:
  From: Clint Byrum cl...@fewbar.com
  To: openstack-dev openstack-dev@lists.openstack.org
  Date: 04/04/2014 19:06
  Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications
inthe
  cloud
 
  Excerpts from Stan Lagun's message of 2014-04-04 02:54:05 -0700:
   Hi Steve, Thomas
  
   I'm glad the discussion is so constructive!
  
   If we add type interfaces to HOT this may do the job.
   Applications in AppCatalog need to be portable across OpenStack
clouds.
   Thus if we use some globally-unique type naming system applications
  could
   identify their dependencies in unambiguous way.
  
   We also would need to establish relations between between
interfaces.
   Suppose there is My::Something::Database interface and 7 compatible
   materializations:
   My::Something::TroveMySQL
   My::Something::GaleraMySQL
   My::Something::PostgreSQL
   My::Something::OracleDB
   My::Something::MariaDB
   My::Something::MongoDB
   My::Something::HBase
  
   There are apps that (say SQLAlchemy-based apps) are fine with any
   relational DB. In that case all templates except for MongoDB and
HBase
   should be matched. There are apps that design to work with MySQL
only.
  In
   that case only TroveMySQL, GaleraMySQL and MariaDB should be
matched.
  There
   are application who uses PL/SQL and thus require OracleDB (there can
be
   several Oracle implementations as well). There are also applications
   (Marconi and Ceilometer are good example) that can use both some SQL
  and
   NoSQL databases. So conformance to Database interface is not enough
and
   some sort of interface hierarchy required.
 
  IMO that is not really true and trying to stick all these databases
into
  one SQL database interface is not a use case I'm interested in
  pursuing.
 
  Far more interesting is having each one be its own interface which
apps
  can assert that they support or not.
 
  Agree, this looks like a feasible goal and would already bring some
value
  add in that one could look up appropriate provider templates instead of
  having to explicitly link them in environments.
  Doing too much of inheritance sounds interesting at first glance but
  buries a lot of complexity.
 
 
  
   Another thing that we need to consider is that even compatible
   implementations may have different set of parameters. For example
   clustered-HA PostgreSQL implementation may require additional
  parameters
   besides those needed for plain single instance variant. Template
that
   consumes *any* PostgreSQL will not be aware of those additional
  parameters.
   Thus they need to be dynamically added to environment's input

Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest] Questions about images

2014-04-16 Thread Thomas Spatzier
 From: Mike Spreitzer mspre...@us.ibm.com
 To: OpenStack Development Mailing List \(not for usage questions\)
 openstack-dev@lists.openstack.org
 Date: 16/04/2014 19:58
 Subject: Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest]
 Questions about images

 Steven Hardy sha...@redhat.com wrote answers to most of my questions.

 To clarify, my concern about URLs and image names is not so much for
 the sake of a person browsing/writing but rather because I want
 programs, scripts, templates, and config files (e.g., localrc for
 DevStack) to all play nice together (e.g., not require a user to
 rename any images or hack any templates).  I think Steve was
 thinking along the same lines when he reiterated the URL he uses in
 localrc and wrote:

  We should use the default name that devstack uses in glance, IMO, e.g
 
  fedora-20.x86_64

FWIW, instead of specifying allowed image names in a template it might be a
better approach to allow for specifying constraints against the image (e.g.
distro is fedora, or distro is ubuntu, version between 12.04 and 13.04 etc)
and then use metadata in glance to select the right image. Of course this
would require some discipline to maintain metadata and we would have to
agree on mandatory attributes and values in it (I am sure there is at least
one standard with proposed options), but it would make templates more
portable ... or at least the author could specify more clearly under which
environments he/she thinks the template will work.
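The glance-side prerequisite would be a set of agreed-upon image properties
along these lines (the property names below follow glance's commonly
proposed image properties, but are illustrative only, not a settled
standard):

```yaml
# metadata attached to a glance image, shown as YAML for readability
name: fedora-20.x86_64
properties:
  os_distro: fedora
  os_version: "20"
  architecture: x86_64
```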

There is a blueprint which goes in this direction:
https://blueprints.launchpad.net/heat/+spec/constraint-based-flavors-and-images

Regards,
Thomas


 Steve also referred to /hot/F20/WordPress_Native.yaml in heat-
 templates.  That template is a counter-example: for the image_id
 parameter it says

   - allowed_values: [ Fedora-i386-20-20131211.1-sda, Fedora-
 x86_64-20-20131211.1-sda ]

 I'm going to assume that Steve and others agree that the allowed
 values constraint in this and similar templates should be revised to
 follow the pattern exemplified by fedora-20.x86_64.  Or should it
 be liberalized to not be so prescriptive?  I'll go even a step
 further and say that there should be a default value and it should
 be the 64-bit value (I am thinking ahead to automating testing of
 heat-templates).

 Thanks,
 Mike


Re: [openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-08 Thread Thomas Spatzier
 From: Steven Dake sd...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 07/04/2014 21:45
 Subject: Re: [openstack-dev] [heat] Managing changes to the Hot
 Specification (hot_spec.rst)

 On 04/07/2014 11:01 AM, Zane Bitter wrote:
  On 06/04/14 14:23, Steven Dake wrote:
  Hi folks,
 
  There are two problems we should address regarding the growth and
change
  to the HOT specification.
 
  First our +2/+A process for normal changes doesn't totally make sense
  for hot_spec.rst.  We generally have some informal bar for
controversial
  changes (of which changes to hot_spec.rst is generally considered:).
I
  would suggest raising the bar on hot_spec.rst to at-least what is
  required for a heat-core team addition (currently 5 approval votes).
  This gives folks plenty of time to review and make sure the heat core
  team is committed to the changes, rather then a very small 2 member
  subset.  Of course a -2 vote from any heat-core would terminate the
  review as usual.
 
  Second, There is a window where we say hey we want this sweet new
  functionality yet it remains unimplemented.  I suggest we create
some
  special tag for these intrinsics/sections/features, so folks know they
  are unimplemented and NOT officially part of the specification until
  that is the case.
 
  We can call this tag something simple like
  *standardization_pending_implementation* for each section which is
  unimplemented.  A review which proposes this semantic is here:
  https://review.openstack.org/85610
 
  This part sounds highly problematic to me.
 
  I agree with you and Thomas S that using Gerrit to review proposed
  specifications is a nice workflow, even if the proper place to do
  this is on the wiki and linked to a blueprint. I would probably go
  along with everything you suggested provided that anything pending
  implementation goes in a separate file or files that are _not_
  included in the generated docs.
 

Yeah, it would be optimal to be able to use gerrit for shaping it while
excluding it from the published docs.

 This is a really nice idea.  We could have a hot_spec_pending.rst which
 lists out the pending ideas so we can have a gerrit review of this doc.
 The doc wouldn't be generated into the externally rendered documentation.

 We could still use blueprints before/after the discussion is had on the
 hot_spec_pending.rst doc, but hot_spec_pending.rst would allow us to
 collaborate properly on the changes.

This could be a pragmatic option. What would be even better would be to
somehow flag sections in hot_spec.rst so they do not get included in the
published docs. This way, we would be able to continuously merge changes
that come in while features are being implemented (typo fixes,
clarifications of existing public spec etc).

Has someone tried this out already? I read there is something like this for
rst:

.. options
   exclude can this take sections?


 The problem I have with blueprints is they suck for collaborative
 discussion, whereas gerrit rocks for this purpose.  In essence, I just
 want a tidier way to discuss the changes then blueprints provide.

Fully agree. Gerrit is nice for collaboration and enforces discipline.
While BPs and the wiki are good, they require everyone to really _be_
disciplined ;-)


 Other folks on this thread, how do you feel about this approach?

 Regards
 -steve
  cheers,
  Zane.
 
  My goal is not to add more review work to people's time, but I really
  believe any changes to the HOT specification have a profound impact on
  all things Heat, and we should take special care when considering
these
  changes.
 
  Thoughts or concerns?
 
  Regards,
  -steve
 
 


Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-05 Thread Thomas Spatzier
Clint Byrum cl...@fewbar.com wrote on 04/04/2014 19:05:04:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Date: 04/04/2014 19:06
 Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the
cloud

 Excerpts from Stan Lagun's message of 2014-04-04 02:54:05 -0700:
  Hi Steve, Thomas
 
  I'm glad the discussion is so constructive!
 
  If we add type interfaces to HOT this may do the job.
  Applications in AppCatalog need to be portable across OpenStack clouds.
  Thus if we use some globally-unique type naming system applications
could
  identify their dependencies in unambiguous way.
 
  We also would need to establish relations between between interfaces.
  Suppose there is My::Something::Database interface and 7 compatible
  materializations:
  My::Something::TroveMySQL
  My::Something::GaleraMySQL
  My::Something::PostgreSQL
  My::Something::OracleDB
  My::Something::MariaDB
  My::Something::MongoDB
  My::Something::HBase
 
  There are apps that (say SQLAlchemy-based apps) are fine with any
  relational DB. In that case all templates except for MongoDB and HBase
  should be matched. There are apps that design to work with MySQL only.
In
  that case only TroveMySQL, GaleraMySQL and MariaDB should be matched.
There
  are application who uses PL/SQL and thus require OracleDB (there can be
  several Oracle implementations as well). There are also applications
  (Marconi and Ceilometer are good example) that can use both some SQL
and
  NoSQL databases. So conformance to Database interface is not enough and
  some sort of interface hierarchy required.

 IMO that is not really true and trying to stick all these databases into
 one SQL database interface is not a use case I'm interested in
 pursuing.

 Far more interesting is having each one be its own interface which apps
 can assert that they support or not.

Agree, this looks like a feasible goal and would already bring some value
add in that one could look up appropriate provider templates instead of
having to explicitly link them in environments.
Doing too much of inheritance sounds interesting at first glance but
buries a lot of complexity.
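For reference, the explicit linking that exists today is done via the
resource_registry in a Heat environment file; switching implementations
means switching environments (the template paths below are made up):

```yaml
# environment file passed at stack-create time
resource_registry:
  My::Something::Database: templates/trove_mysql.yaml
  # an alternate environment could map the same interface to, say,
  # templates/galera_mysql.yaml without touching the main template
```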


 
  Another thing that we need to consider is that even compatible
  implementations may have different set of parameters. For example
  clustered-HA PostgreSQL implementation may require additional
parameters
  besides those needed for plain single instance variant. Template that
  consumes *any* PostgreSQL will not be aware of those additional
parameters.
  Thus they need to be dynamically added to environment's input
parameters
  and resource consumer to be patched to pass those parameters to actual
  implementation
 

 I think this is a middleware pipe-dream and the devil is in the details.

 Just give users the ability to be specific, and then generic patterns
 will arise from those later on.

 I'd rather see a focus on namespacing and relative composition, so that
 providers of the same type that actually do use the same interface but
 are alternate implementations will be able to be consumed. So for
instance
 there is the non-Neutron LBaaS and the Neutron LBaaS, and both have their
 merits for operators, but are basically identical from an application
 standpoint. That seems a better guiding use case than different
databases.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Thomas Spatzier
Hi Steve,

your indexing idea sounds interesting, but I am not sure it would work
reliably. The kind of matching based on names of parameters and outputs and
internal get_attr uses rests on very strong assumptions, and I think there
is a not-so-low risk of false positives. What if the template includes some
internal details that would not affect the matching but still change the
behavior in a way that would break the composition? Or what if a user by
pure coincidence built a template that uses the same parameter and output
names as one of those abstract types that were mentioned, but the template
is simply not built for composition?

I think it would be much cleaner to have an explicit attribute in the
template that says "this template can be used as a realization of type
My::SomeType" and use that for presenting the user choice and building the
environment.
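Purely as a sketch of the idea (there is no such key in HOT today, so the
"realizes" name and everything around it is made up):

```yaml
heat_template_version: 2013-05-23

# hypothetical top-level key declaring the abstract type this template
# realizes; a catalog could index on it instead of guessing from names
realizes: My::SomeType

parameters:
  db_username: { type: string }
  db_password: { type: string, hidden: true }

resources:
  db_server:
    type: OS::Nova::Server
    # ... concrete implementation details ...

outputs:
  db_url:
    value: { get_attr: [ db_server, first_address ] }
```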

Regards,
Thomas

Steve Baker sba...@redhat.com wrote on 04/04/2014 06:12:38:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 04/04/2014 06:14
 Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the
cloud

 On 03/04/14 13:04, Georgy Okrokvertskhov wrote:
 Hi Steve,

 I think this is exactly the place where we have a boundary between
 Murano catalog and HOT.

 In your example one can use an abstract resource type and specify a
 correct implementation via an environment file. This is how it will be
 done in the final stage in Murano too.

 Murano will solve another issue. In your example the user should know
 what template to use as a provider template. In Murano this will be
 done in the following way:
 1) User selects an app which requires a DB
 2) Murano sees this requirement for DB and do a search in the app
 catalog to find all apps which expose this functionality. Murano
 uses app package definitions for that.
 3) User select in UI specific DB implementation he wants to use.

 As you see, in the Murano case the user has no preliminary knowledge of
 available apps\templates and uses the catalog to find them. The search
 criteria can be quite complex, using different application
 attributes. If we think about moving the application definition to HOT
 format, it should provide all necessary information for the catalog.

 In order to search apps in catalog which uses HOT format we need
 something like that:

 One needs to define an abstract resource like
 OS:HOT:DataBase

 Then in each DB implementation of the DB resource one has to somehow
 refer to this abstract resource as a parent, like

 Resource OS:HOT:MySQLDB
   Parent: OS:HOT:DataBase

 Then the catalog part can use this information and build a list of all
 apps\HOTs with resources whose parent is OS:HOT:DataBase

 That is what we are looking for. As you see, in this example I am
 not talking about version and other attributes which might be
 required for the catalog.


 This sounds like a vision for Murano that I could get behind. It
 would be a tool which allows fully running applications to be
 assembled and launched from a catalog of Heat templates (plus some
 app lifecycle workflow beyond the scope of this email).

 We could add type interfaces to HOT but I still think duck typing
 would be worth considering. To demonstrate, let's assume that when a
 template gets cataloged, metadata is also indexed about what
 parameters and outputs the template has. So for the case above:
 1) User selects an app to launch from the catalog
 2) Murano performs a heat resource-type-list and compares that with
 the types in the template. The resource-type list is missing
 My::App::Database for a resource named my_db
 3) Murano analyses the template and finds that My::App::Database is
 assigned 2 properties (db_username, db_password) and elsewhere in
 the template is a {get_attr: [my_db, db_url]} attribute access.
 4) Murano queries glance for templates, filtering by templates which
 have parameters [db_username, db_password] and outputs [db_url]
 (plus whatever appropriate metadata filters)
 5) Glance returns 2 matches. Murano prompts the user for a choice
 6) Murano constructs an environment based on the chosen template,
 mapping My::App::Database to the chosen template
 7) Murano launches the stack

 Sure, there could be a type interface called My::App::Database which
 declares db_username, db_password and db_url, but since a heat
 template is in a readily parsable declarative format, all required
 information is available to analyze, both during glance indexing and
 app launching.
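To make that matching concrete: a provider template like the following
(a hand-written sketch; resource types, property names and values are
illustrative) would be indexed under parameters [db_username, db_password]
and outputs [db_url], and would therefore be offered as a candidate
realization of My::App::Database:

```yaml
heat_template_version: 2013-05-23

parameters:
  db_username: { type: string }
  db_password: { type: string, hidden: true }

resources:
  trove_db:
    type: OS::Trove::Instance
    # property names and values below are illustrative only
    properties:
      flavor: m1.small
      size: 5

outputs:
  db_url:
    description: what consumers would read via {get_attr: [my_db, db_url]}
    value: { get_attr: [ trove_db, href ] }
```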




 On Wed, Apr 2, 2014 at 3:30 PM, Steve Baker sba...@redhat.com wrote:
 On 03/04/14 10:39, Ruslan Kamaldinov wrote:
  This is a continuation of the MuranoPL questions thread.
 
  As a result of ongoing discussions, we figured out that the definition of
layers
  which each project operates on and has responsibility for is not yet
agreed
  and discussed between projects and teams (Heat, Murano, Solum (in
  alphabetical order)).
 
  Our suggestion and expectation from this working group is to have such
  a definition of layers, 

Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Thomas Spatzier
 From: Mike Spreitzer mspre...@us.ibm.com
 To: OpenStack Development Mailing List \(not for usage questions\)
 openstack-dev@lists.openstack.org
 Date: 03/04/2014 07:10
 Subject: Re: [openstack-dev] [heat] metadata for a HOT

 Zane Bitter zbit...@redhat.com wrote on 04/02/2014 05:36:43 PM:

  I think that if you're going to propose a new feature, you should at
  least give us a clue who you think is going to use it and what for ;)

 I was not eager to do that yet because I have not found a fully
 satisfactory answer; at this point I am exploring options.  But
 the problem I am thinking about is how Heat might connect to a
 holistic scheduler (a scheduler that makes a joint decision about a
 bunch of resources of various types).  Such a scheduler needs input
 describing the things to be scheduled and the policies to apply in
 scheduling; the first half of that sounds a lot like a Heat
 template, so my thoughts go in that direction.  But the HOT language
 today (since https://review.openstack.org/#/c/83758/ was merged)
 does not have a place to put policy that is not specific to a single
 resource.

I think you bring up a specific use case here, i.e. applying policies for
placement/scheduling when deploying a stack. This is just a thought, but I
wonder whether it would make more sense to then define a specific extension
to HOT instead of having a generic metadata section and stuffing everything
that does not fit into other places into metadata.

I mean, the use cases Keith brought up are completely different (UI and user
related), and I understand both use cases. But is the idea to put just
everything into metadata, or would different classes of use cases justify
different sections? The latter would enforce better documentation of
semantics. If everything goes into a metadata section, the contents also
need to be clearly specified. Otherwise, the resulting template won't be
portable. Ok, the standard HOT stuff will be portable, but not the
metadata, so no two users will be able to interpret it the same way.


  IIRC this has been discussed in the past and the justifications for
  including it in the template (as opposed to allowing metadata to be
  attached in the ReST API, as other projects already do for many things)

  were not compelling.

 I see that Keith Bray mentioned https://wiki.openstack.org/wiki/
 Heat/StackMetadata and https://wiki.openstack.org/wiki/Heat/UI in
 another reply on this thread.  Are there additional places to look
 to find that discussion?

 I have also heard that there has been discussion of language
 extension issues.  Is that a separate discussion and, if so, where
 can I read it?

 Thanks,
 Mike




Re: [openstack-dev] [Heat] Some thoughts on the mapping section

2014-04-03 Thread Thomas Spatzier
Excerpts from Thomas Herve's message on 03/04/2014 09:21:05:
 From: Thomas Herve thomas.he...@enovance.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 03/04/2014 09:21
 Subject: Re: [openstack-dev] [Heat] Some thoughts on the mapping section


snip


  Speaking of offering options for selection, there is another proposal
on
  adding conditional creation of resources [3], whose use case is to enable
  or disable resource creation (among others).  My perception is that
  these are all relevant enhancements to the reusability of HOT
templates,
  though I don't think we really need very sophisticated combinatory
  conditionals.

 I think that's interesting that you mentioned that, because Zane
 talked about a variables section, which would encompass what
 conditions and mappings mean. That's why we're discussing
 extensively about those design points, to see where we can be a bit
 more generic to handle more use cases.

+1 on bringing those suggestions together. It seems to me that there is
quite some overlap in what mappings and variables are meant to solve, so it
would be nice to have one solution for both. As you mentioned earlier, the
objection against mappings was not that CFN had them and we didn't want
them, but that the use case did not sell well. If there are things
that make sense, no objection, but maybe we can do it smarter in HOT ;-)

Regards,
Thomas


 Cheers,

 --
 Thomas






Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Thomas Spatzier
 From: Keith Bray keith.b...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 03/04/2014 19:51
 Subject: Re: [openstack-dev] [heat] metadata for a HOT

 Steve, agreed.  Your description I believe is the conclusion that
 the community came to when this was perviously discussed, and we
 managed to get the implementation of parameter grouping and ordering
 [1] that you mentioned which has been very helpful.  I don't think
 we landed the keywords blueprint [2], which may be controversial
 because it is essentially unstructured. I wanted to make sure Mike
 had the links for historical context, but certainly understand and
 appreciate your point of view here.  I wasn't able to find the email
 threads to point Mike to, but assume they exist in the list archives
 somewhere.

 We proposed another specific piece of template data [3], for which I
 can't remember whether it was met with resistance or we just didn't
 get to implementing it, since we knew we would have to store other
 data specific to our use cases in other files anyway.   We decided
 to go with storing our extra information in a catalog (really just a
 Git repo with a README.MD [4]) for now  until we can implement
 acceptable catalog functionality somewhere like Glance, hopefully in
 the Juno cycle.  When we want to share the template, we share all
 the files in the repo (inclusive of the README.MD).  It would be
 more ideal if we could share a single file (package) inclusive of
 the template and corresponding help text and any other UI hint info
 that would helpful.  I expect service providers to have differing

I agree that packaging everything that makes up a template (which will in
many cases not be a single template file, but nested templates,
environments, scripts, ...) into one archive makes sense. We have this
concept in TOSCA and I am sure we will have to implement a solution for
this as part of the TOSCA YAML to HOT converter work that we started. If
several people see this requirement, let's see if we can join forces on a
common solution.

 views of the extra data they want to store with a template... So
 it'd just be nice to have a way to account for service providers to
 store their unique data along with a template that is easy to share
 and is part of the template package.  We bring up portability and
 structured data often, but I'm starting to realize that portability
 of a template breaks down unless every service provider runs exactly
 the same Heat resources (same image IDs, flavor types, etc.). I'd
 like to drive more standardization of data for image and template
 data into Glance so that in HOT we can just declare things like
 Linux, Flavor Ubuntu, latest LTS, minimum 1Gig and automatically
 discover and choose the right image to provision, or error if a
 suitable match can not be found.  The Murano team has been hinting

Sahdev from our team recently created a BP for exactly that scenario.
Please have a look and see if that is in line with your thinking and
provide comments as necessary:

https://blueprints.launchpad.net/heat/+spec/constraint-based-flavors-and-images
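A purely hypothetical illustration of what such constraint-based
declarations might look like in a template (none of this syntax exists
today; it just restates the "Linux, Ubuntu latest LTS, minimum 1 GB"
example above):

```yaml
resources:
  server:
    type: OS::Nova::Server
    properties:
      # hypothetical constraint objects instead of concrete IDs; the
      # engine would resolve them against Glance/Nova at deploy time
      image:
        os_family: linux
        distro: ubuntu
        version: latest-lts
      flavor:
        min_ram_mb: 1024
```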

 at wanting to solve a similar problem, but with a broader vision
 from a complex-multi application declaration perspective that
 crosses multiple templates or is a layer above just matching to what
 capabilities Heat resources provide and matching against
 capabilities that a catalog of templates provide (and mix that with
 capabilities the cloud API services provide).  I'm not yet convinced
 that can't be done with a parent Heat template since we already have
 the declarative constructs and language well defined, but I
 appreciate the use case and perspective those folks are bringing to
 the conversation.

 [1]
https://blueprints.launchpad.net/heat/+spec/parameter-grouping-ordering
  https://wiki.openstack.org/wiki/Heat/UI#Parameter_Grouping_and_Ordering

 [2] https://blueprints.launchpad.net/heat/+spec/stack-keywords
 https://wiki.openstack.org/wiki/Heat/UI#Stack_Keywords

 [3] https://blueprints.launchpad.net/heat/+spec/add-help-text-to-template
 https://wiki.openstack.org/wiki/Heat/UI#Help_Text

 [4] Ex. Help Text accompanying a template in README.MD format:
 https://github.com/rackspace-orchestration-templates/docker

 -Keith

 From: Steven Dake sd...@redhat.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)

 openstack-dev@lists.openstack.org
 Date: Thursday, April 3, 2014 10:30 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [heat] metadata for a HOT

 On 04/02/2014 08:41 PM, Keith Bray wrote:
 https://wiki.openstack.org/wiki/Heat/StackMetadata

 https://wiki.openstack.org/wiki/Heat/UI

 -Keith

 Keith,

 Taking a look at the UI specification, I thought I'd take a look at
 adding parameter grouping and ordering to the hot_spec.rst file.
 That seems like a really nice constrained use case with a 

Re: [openstack-dev] [Heat] Some thoughts on the mapping section

2014-04-03 Thread Thomas Spatzier
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 03/04/2014 22:09
 Subject: Re: [openstack-dev] [Heat] Some thoughts on the mapping section

 On 03/04/14 03:21, Thomas Herve wrote:
  Speaking of offering options for selection, there is another proposal
on
  adding conditional creation of resources [3], whose use case is to
  enable or disable resource creation (among others).  My perception is that
  these are all relevant enhancements to the reusability of HOT
templates,
  though I don't think we really need very sophisticated combinatory
  conditionals.
  I think that's interesting that you mentioned that, because Zane
 talked about a variables section, which would encompass what
 conditions and mappings mean. That's why we're discussing
 extensively about those design points, to see where we can be a bit
 more generic to handle more use cases.

 There was some discussion in the review[1] of having an if/then function
 (equivalent of the ternary ?: operator in C) for calculating variables...
 on reflection that is nothing more than a dumbed-down version of
 Fn::Select in CloudFormation (which we have no equivalent to in HOT) in
 which the only possible index values are true and false.

 The differences between Fn::Select and Fn::FindInMap are:

 1) The bizarre double-indirect lookup, of course; and
 2) The actual mappings are defined once in a single place, rather than
 everywhere you need to access them.

 I think we're all agreed that (1) is undesirable in itself. It occurs to
 me that the existence of a variables section could render (2) moot also
 (since you could calculate the result in one place, and just reference
 it from there on).

 So if we had the variables section, we probably no longer need to
 consider a mapping section and a replacement for Fn::FindInMap, just a
 replacement for Fn::Select that could also cover the if/then use case.

 Thoughts?

+1 for solving this in one place and coming up with such a solution that
introduces just one new thing to solve problems that are addressed with
two different things in CFN.
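For readers following along, CFN's Fn::Select (which HOT currently has no
equivalent of) picks a list element by a 0-based index; the if/then use
case is just the degenerate two-element form. Shown in YAML syntax, with
the second example purely illustrative:

```yaml
# CloudFormation's Fn::Select picks a list element by index:
SelectedGrape:
  Fn::Select: [ "1", [ "apples", "grapes", "oranges" ] ]  # -> "grapes"

# The if/then use case is a two-element select, where the index would be
# the 0/1 result of some boolean condition (illustrative only):
Result:
  Fn::Select: [ 0, [ "value-if-false", "value-if-true" ] ]
```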

Regards,
Thomas


 cheers,
 Zane.

 [1] https://review.openstack.org/84468






Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-26 Thread Thomas Spatzier
Hi Dimitry,

the current working draft for the simplified profile in YAML is available
at [1]. Note that this is still work in progress, but should already give a
good impression of where we want to go. And as I said, we are open for
input.

The stackforge project [2] that Sahdev from our team created is in its
final setup phase (gerrit review still has to be set up), as far as I
understood it. My information is that parser code for TOSCA YAML according
to the current working draft is going to get in by end of the week. This
code is currently maintained in Sahdev's own github repo [3]. Sahdev (IRC
spzala) would be the best contact for the moment when it comes to detail
questions on the code.

[1]
https://www.oasis-open.org/committees/document.php?document_id=52571wg_abbrev=tosca
[2] https://github.com/stackforge/heat-translator
[3] https://github.com/spzala/heat-translator

Regards,
Thomas

 From: Dmitry mey...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 26/03/2014 11:17
 Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

 Hi Thomas,
 Can you share some documentation of what you're doing right now with
 TOSCA-compliant layer?
 We would like to join to this effort.

 Thanks,
 Dmitry


 On Wed, Mar 26, 2014 at 10:38 AM, Thomas Spatzier
thomas.spatz...@de.ibm.com
  wrote:
 Excerpt from Zane Bitter's message on 26/03/2014 02:26:42:

  From: Zane Bitter zbit...@redhat.com
  To: openstack-dev@lists.openstack.org
  Date: 26/03/2014 02:27
  Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
 

 snip

   Cloud administrators are usually technical guys who are capable of
   learning HOT and writing YAML templates. They know the exact
   configuration of their cloud (what services are available, what
   version of OpenStack the cloud is running) and generally understand
   how OpenStack works. They also know about the software they intend to
   install. If such a guy wants to install Drupal, he knows exactly that
   he needs a HOT template describing a Fedora VM with Apache + PHP +
   MySQL + Drupal itself. It is not a problem for him to write such a
   HOT template.
 
  I'm aware that TOSCA has these types of constraints, and in fact I
  suggested to the TOSCA TC that maybe this is where we should draw the
  line between Heat and some TOSCA-compatible service: HOT should be a
  concrete description of exactly what you're going to get, whereas some
  other service (in this case Murano) would act as the constraints
solver.
  e.g. something like an image name would not be hardcoded in a Murano
  template, you have some constraints about which operating system and
  what versions should be allowed, and it would pick one and pass it to
  Heat. So I am interested in this approach.

 I can just support Zane's statements above. We are working on exactly
those
 issues in the TOSCA YAML definition, so it would be ideal to just
 collaborate on this. As Zane said, there currently is a thinking that
some
 TOSCA-compliant layer could be a (maybe thin) layer above Heat that
 resolves a more abstract (thus more portable) template into something
 concrete, executable. We have started developing code (early versions are
 on stackforge already) to find out the details.

 
  The worst outcome here would be to end up with something that was
  equivalent to TOSCA but not easily translatable to the TOSCA Simple
  Profile YAML format (currently a Working Draft). Where 'easily
  translatable' preferably means 'by just changing some names'. I can't
  comment on whether this is the case as things stand.
 

 The TOSCA Simple Profile in YAML is a working draft at the moment, so we
 are pretty much open for any input. So let's try to get the right folks
 together and get it right. Since the Murano folks have indicated before
 that they are evaluating the option to join the OASIS TC, I am optimistic
 that we can get the streams together. Having implementation work going on
 here in this community in parallel to the standards work, and both
streams
 inspiring each other, will be fun :-)


 Regards,
 Thomas






Re: [openstack-dev] [Heat] Resource dependencies

2014-03-21 Thread Thomas Spatzier
Shaunak Kashyap shaunak.kash...@rackspace.com wrote on 21/03/2014
05:26:50:

 From: Shaunak Kashyap shaunak.kash...@rackspace.com
 To: openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org
 Date: 21/03/2014 05:29
 Subject: [openstack-dev] [Heat] Resource dependencies

 Hi,

 In a Heat template, what does it mean for a resource to depend on
 another resource? As in, what is the impact of creating a dependency?

When a resource depends on another resource, the Heat engine will only
start processing that resource once the resource it depends on has been
created. If a resource depends on multiple resources, all of those
resources have to be created before the dependent resource is processed.
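For example, in HOT syntax (resource names and property values are made
up):

```yaml
resources:
  database:
    type: OS::Nova::Server
    properties:
      image: fedora-20
      flavor: m1.small

  web_server:
    type: OS::Nova::Server
    # web_server is only created after 'database' has been created
    depends_on: database
    properties:
      image: fedora-20
      flavor: m1.small
```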


 I read http://docs.openstack.org/developer/heat/template_guide/
 hot_spec.html#resources-section and found this definition of the
 “depends_on” attribute:

  This optional attribute allows for specifying dependencies of the
 current resource on one or more other resources. Please refer to
 section hot_spec_resources_dependencies for details.


 Unfortunately, I can’t seem to find the referenced
 “hot_spec_resources_dependencies” section anywhere.

I just checked the source in github and the section is there:

https://github.com/openstack/heat/blob/master/doc/source/template_guide/hot_spec.rst#L452

It just looks like the wrong heading markup was used. Nice spotting,
actually; I will fix it.


 Thank you,

 Shaunak



Re: [openstack-dev] [Heat]Heat use as a standalone component for Cloud Managment over multi IAAS

2014-03-20 Thread Thomas Spatzier
Just out of curiosity: what is the purpose of project Warm? From the wiki
page and the samples it looks pretty much like what Heat is doing.
And "warm" is almost "HOT", so could you imagine your use cases just being
addressed by Heat using HOT templates?

Regards,
Thomas

sahid sahid.ferdja...@cloudwatt.com wrote on 18/03/2014 12:56:47:

 From: sahid sahid.ferdja...@cloudwatt.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 18/03/2014 12:59
 Subject: Re: [openstack-dev] [Heat]Heat use as a standalone
 component for Cloud Managment over multi IAAS

 Sorry for the late of this response,

 I'm currently working on a project called Warm.
 https://wiki.openstack.org/wiki/Warm

 It is used as a standalone client and tries to deploy small OpenStack
 environments from YAML templates. You can find some samples here:
 https://github.com/sahid/warm-templates

 s.

 - Original Message -
 From: Charles Walker charles.walker...@gmail.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, February 26, 2014 2:47:44 PM
 Subject: [openstack-dev] [Heat]Heat use as a standalone component
 for Cloud Managment over multi IAAS

 Hi,


 I am trying to deploy the proprietary application made by my company on
 the cloud. The prerequisite for this is to have an IAAS, which can be
 either a public cloud or a private cloud (OpenStack is an option for a
 private IAAS).


 The first prototype I made was based on a homemade Python orchestrator
 and Apache libcloud to interact with the IAAS (AWS, Rackspace and GCE).

 The orchestrator part is Python code reading a template file which
 contains the info needed to deploy my application. This template file
 indicates the number of VMs and the scripts associated with each VM type
 to install it.


 Now I was trying to have a look on existing open source tool to do the
 orchestration part. I find JUJU (https://juju.ubuntu.com/) or HEAT (
 https://wiki.openstack.org/wiki/Heat).

 I am investigating deeper HEAT and also had a look on
 https://wiki.openstack.org/wiki/Heat/DSL which mentioned:

 *Cloud Service Provider* - A service entity offering hosted cloud
services
 on OpenStack or another cloud technology. Also known as a Vendor.


 I think HEAT in its current version will not match my requirements, but
 I have the feeling that it is going to evolve and could cover my needs.


 I would like to know if it would be possible to use HEAT as a standalone
 component in the future (without Nova and other OpenStack modules). The
 goal would be to deploy an application from a template file on multiple
 cloud services (like AWS, GCE).


 Any feedback from people working on HEAT could help me.


 Thanks, Charles.






Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Thomas Spatzier
Excerpts from Zane Bitter's message on 19/03/2014 18:18:34:

 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 19/03/2014 18:21
 Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

 On 19/03/14 05:00, Stan Lagun wrote:
  Steven,
 
  Agree with your opinion on HOT expansion. I see that inclusion of
  imperative workflows and ALM would require a major Heat redesign and
  would probably be impossible without losing compatibility with previous
  HOT syntax. It would blur Heat's mission, confuse current users and
  raise a lot of questions about what should and what should not be in
  Heat. That's why we chose to build a system on top of Heat rather than
  extending HOT.

 +1, I agree (as we have discussed before) that it would be a mistake to
 shoehorn workflow stuff into Heat. I do think we should implement the
 hooks I mentioned at the start of this thread to allow tighter
 integration between Heat and a workflow engine (i.e. Mistral).

+1 on not putting workflow stuff into Heat. Rather, let's come up with a
nice way for Heat and a workflow service to work together.
That could be done in two ways: (1) let Heat hand off to a workflow service
for certain tasks, or (2) let people define workflow tasks that can easily
work on Heat-deployed resources. Maybe both make sense, but right now I am
leaning more towards (2).


 So building a system on top of Heat is good. Building it on top of
 Mistral as well would also be good, and that was part of the feedback
 from the TC.

 To me, building on top means building on top of the languages (which
 users will have to invest a lot of work in learning) as well, rather
 than having a completely different language and only using the
 underlying implementation(s).

That all sounds logical to me and would keep things clean, i.e. keep the
HOT language clean by not mixing it with imperative expression, and keep
the Heat engine clean by not blowing it up to act as a workflow engine.

When I think about the two aspects that are being brought up in this thread
(declarative description of a desired state and workflows) my thinking is
that much (and actually as much as possible) can be done declaratively the
way Heat does it with HOT. Then for bigger lifecycle management there will
be a need for additional workflows on top, because at some point it will be
hard to express management logic declaratively in a topology model.
Those additional flows on-top will have to be aware of the instance created
from a declarative template (i.e. a Heat stack) because it needs to act on
the respective resources to do something in addition.

So when thinking about a domain specific workflow language, it should be
possible to define tasks (in a template-aware manner) like "on resource XYZ
of the template, do something", or "update resource XYZ of the template
with this state, then do this", etc. At runtime this would resolve to the
actual resource instances created from the resource templates. Making such
constructs available to the workflow authors would make sense. Having a
workflow service able to execute this via the right underlying APIs would
be the execution part. I think from an instance API perspective, Heat
already brings a lot for this with the stack model, so workflow tasks could
be written to use the stack API to access instance information. Things
like resource updates are also already there.
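Just to illustrate the idea (entirely hypothetical syntax, not an existing
Mistral or Heat feature), such a template-aware task definition could look
like:

```yaml
# hypothetical workflow definition referencing resources of a Heat stack
workflow:
  tasks:
    backup_database:
      # resolved at runtime to the instance created from resource
      # 'db_server' of the given stack
      target:
        stack: my_app_stack
        resource: db_server
      action: run_script
      input:
        script: /opt/app/backup.sh
```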

BTW, we have a similar concept (or are working on a refinement of it based
on latest discussions) in TOSCA and call it the "plan portability API",
i.e. an API that a declarative engine would expose so that fit-for-purpose
workflow tasks can be defined on top.

Regards,
Thomas





Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities

2014-03-11 Thread Thomas Spatzier
Randall Burt randall.b...@rackspace.com wrote on 10/03/2014 19:51:58:

 From: Randall Burt randall.b...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 10/03/2014 19:55
 Subject: Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team
 contrib. to Heat TOSCA activities


 On Mar 10, 2014, at 1:26 PM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com
  wrote:

  Hi,
 
  Thomas and Zane initiated a good discussion about Murano DSL and
 TOSCA initiatives in Heat. I think will be beneficial for both teams
 to contribute into TOSCA.

 Wasn't TOSCA developing a simplified version in order to converge with
HOT?

Right, we are currently developing a simple profile of TOSCA (basically a
subset and cleanup of the v1.0 full feature set) and a YAML rendering for
that simple profile. We are working on aligning this as best as we can with
HOT, but there will be some differences. E.g. there will be additional
elements in TOSCA YAML that are not present in HOT (at least today). We
will be able to translate the topology portion of TOSCA models into HOT via
the heat-translator that Sahdev has kicked off. Over time, I could see some
of the advanced features we only have in TOSCA YAML today being adopted
by HOT, but let's see what makes sense step by step. I could also well
imagine TOSCA YAML as a layer for a portable format above HOT that gets
bound to plain HOT and/or other constructs (Mistral, Murano ...) during
deployment.
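
To illustrate the kind of mapping I have in mind (keyword names follow
current working drafts and are subject to change, and the property-to-flavor
mapping is just an assumed example):

```yaml
# TOSCA Simple Profile in YAML (translator input); draft syntax:
node_templates:
  my_server:
    type: tosca.nodes.Compute
    properties:
      num_cpus: 2
      mem_size: 4 GB

# Plain HOT (translator output); flavor/image chosen by the translator:
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      flavor: m1.medium
      image: fedora-20
```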

Note that TOSCA will continue to be a combination of a declarative model
(topology) and an imperative model (workflows). The imperative model is
optional, so if you don't require special flows for an application you can
just go with the declarative approach.
The imperative part could be passed to e.g. Mistral (or Murano?), i.e.
TOSCA having the concept of workflows (we call them plans) does not
necessarily mean to pull this into Heat, but to distribute work to
different components in the orchestration program.
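
As a rough illustration of that split (the keywords below only approximate
our current working drafts and are not final): a TOSCA model can carry a
declarative topology plus an optional reference to an imperative plan:

```yaml
# Illustrative only: declarative topology plus optional imperative plan.
node_templates:
  web_server:
    type: tosca.nodes.Compute

plans:
  backup_plan:
    description: Application-specific backup flow
    # the plan body lives in an external workflow language (e.g. BPMN)
    # and could be handed to a workflow component such as Mistral
    reference: plans/backup.bpmn
```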


  While Mirantis is working on the organizational part for OASIS, I
  would like to understand what is the current view on the TOSCA and
  HOT relations.
  It looks like TOSCA can cover all aspects of declarative
  components (HOT templates) and imperative workflows which can be
 covered by Murano. What do you think about that?

I'm looking forward to having you join the TOSCA work and contribute your
experience!


 Aren't workflows covered by Mistral? How would this be different
 than including mistral support in Heat?

See my comment above: I don't see the concept of flows in a model like
TOSCA require us pushing workflows into Heat, but we could just push one
portion (declarative model) to Heat and the other part to Mistral and find
a way that the flows can access e.g. stack information in Heat.


  I think the TOSCA format can be used as a description of Applications
 and heat-translator can actually convert TOSCA descriptions to both
 HOT and Murano files which can be then used for actual Application
  deployment. Both Heat and Murano workflows can coexist in the
 Orchestration program and cover both declarative templates and
 imperative workflows use cases.
 
  --
  Georgy Okrokvertskhov
  Architect,
  OpenStack Platform Products,
  Mirantis
  http://www.mirantis.com
  Tel. +1 650 963 9828
  Mob. +1 650 996 3284







Re: [openstack-dev] Incubation Request: Murano

2014-03-05 Thread Thomas Spatzier
Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 05/03/2014
00:32:08:

 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 05/03/2014 00:34
 Subject: Re: [openstack-dev] Incubation Request: Murano

 Hi Thomas, Zane,

 Thank you for bringing TOSCA to the discussion. I think this is
 important topic as it will help to find better alignment or even
 future merge of Murano DSL and Heat templates. Murano DSL uses YAML
 representation too, so we can easily merge use constructions from
 Heat and probably any other YAML based TOSCA formats.

 I will be glad to join TOSCA TC. Is there any formal process for that?

The first part is that your company must be a member of OASIS. If that is
the case, I think you can simply go to the TC page [1] and click a button
to join the TC. If your company is not yet a member, you could get in touch
with the TC chairs Paul Lipton and Simon Moser and ask for the best next
steps. We recently had people from GigaSpaces join the TC, and since they
are also doing very TOSCA aligned implementation in Cloudify, their input
will probably help a lot to advance TOSCA.


 I also would like to use this opportunity and start conversation
 with Heat team about Heat roadmap and feature set. As Thomas
  mentioned in his previous e-mail, the TOSCA topology story is quite
  covered by HOT. At the same time there are entities like Plans which
  are covered by Murano. We had a discussion about bringing workflows to
  the Heat engine before the HK summit and it looks like the Heat team has no
 plans to bring workflows into Heat. That is actually why we
 mentioned Orchestration program as a potential place for Murano DSL
 as Heat+Murano together will cover everything which is defined by TOSCA.

I remember the discussions about whether to bring workflows into Heat or
not. My personal opinion is that workflows are probably out of the scope of
Heat (i.e. everything but the derived orchestration flows the Heat engine
implements). So there could well be a layer on top of Heat that lets Heat
deal with all topology-related declarative business and adds workflow-based
orchestration around it. TOSCA could be a way to describe the respective
overarching models and then hand the different processing tasks to the
right engine to deal with it.


 I think TOSCA initiative can be a great place to collaborate. I
 think it will be possible then to use Simplified TOSCA format for
 Application descriptions as TOSCA is intended to provide such
descriptions.

 Is there a team who are driving TOSCA implementation in OpenStack
 community? I feel that such team is necessary.

We started to implement a TOSCA YAML to HOT converter and our team member
Sahdev (IRC spzala) has recently submitted code for a new stackforge
project [2]. This is very initial, but could be a point to collaborate.

[1] https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca
[2] https://github.com/stackforge/heat-translator

Regards,
Thomas


 Thanks
 Georgy


 On Tue, Mar 4, 2014 at 2:36 PM, Thomas Spatzier
thomas.spatz...@de.ibm.com
  wrote:
 Excerpt from Zane Bitter's message on 04/03/2014 23:16:21:
  From: Zane Bitter zbit...@redhat.com
  To: openstack-dev@lists.openstack.org
  Date: 04/03/2014 23:20
  Subject: Re: [openstack-dev] Incubation Request: Murano
 
  On 04/03/14 00:04, Georgy Okrokvertskhov wrote:
  
  It so happens that the OASIS's TOSCA technical committee are working as
  we speak on a TOSCA Simple Profile that will hopefully make things
  easier to use and includes a YAML representation (the latter is great
  IMHO, but the key to being able to do it is the former). Work is still
  at a relatively early stage and in my experience they are very much open
  to input from implementers.

 Nice, I was probably also writing a mail with this information at about the
 same time :-)
 And yes, we are very much interested in feedback from implementers and open
 to suggestions. If we can find gaps and fill them with good proposals, now
 is the right time.

 
  I would strongly encourage you to get involved in this effort (by
  joining the TOSCA TC), and also to architect Murano in such a way that
   it can accept input in multiple formats (this is something we are making
   good progress toward in Heat). Ideally the DSL format for Murano+Heat
   should be a trivial translation away from the relevant parts of the YAML
   representation of TOSCA Simple Profile.

 Right, having a straight-forward translation would be really desirable. The
 way to get there can actually be two-fold: (1) any feedback we get from the
 Murano folks on the TOSCA simple profile and YAML can help us to make TOSCA
 capable of addressing the right use cases, and (2) on the other hand make
 sure the implementation goes in a direction that is in line with what TOSCA
 YAML will look like.

 
  cheers,
  Zane

Re: [openstack-dev] Incubation Request: Murano

2014-03-05 Thread Thomas Spatzier
Hi Stan,

thanks for sharing your thoughts about Murano and relation to TOSCA. I have
added a few comments below.

 From: Stan Lagun sla...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 05/03/2014 00:51
 Subject: Re: [openstack-dev] Incubation Request: Murano

 Hi all,

 Completely agree with Zane. Collaboration with TOSCA TC is a way to
 go as Murano is very close to TOSCA. Like Murano = 0.9 * TOSCA + UI
 + OpenStack services integration.

 Let me share my thoughts on TOSCA as I read all TOSCA docs and I'm
 also the author of the initial Murano DSL design proposal so I can
 probably compare them.

 We initially considered just implementing TOSCA before going with our
 own DSL. There was no YAML TOSCA out there at that time, just an XML
version.

 So here's why we've wrote our own DSL:

 1. TOSCA is very complex and verbose. Considering there is no
  production-ready tooling for TOSCA, users would have to type all
 those tons of XML tags and namespaces and TOSCA XMLs are really hard
  to read and write. No one is going to do this, especially outside of the
  Java-enterprise world

Right, that's why we are doing the simple profile and YAML work now to
overcome those adoption issues.

 2. TOSCA has no workflow language. TOSCA draft states that the
 language is indeed needed and recommends using BPEL or BPMN for that
matter.

Right, the goal of TOSCA was not to define a new workflow language but to
refer to existing ones. This does not mean, of course that other languages
than BPEL or BPMN cannot be used. We still consider standardization of such
a language out of scope of the TC, but if there is some widely adopted flow
language being implemented, e.g. in the course of Murano, I could imagine
that a community could use such a simpler language in an OpenStack
environment. Ideally though, such a simpler workflow language would be
translatable to a standard language like BPMN (or a subset of it) so those
who have a real process engine can consume the flow descriptions.

 Earlier versions of Murano showed that some sort of workflow
  language (declarative, imperative, whatever) is absolutely required
 for non-trivial cases. If you don't have workflow language then you
 have to hard-code a lot of knowledge into engine in Python. But the
 whole idea of AppCatalog was that users upload (share) their
 application templates that contain application-specific maintenance/
 deployment code that is run in on common shared server (not in any
 particular VM) and thus capable of orchestrating all activities that
 are taking place on different VMs belonging to given application
 (for complex applications with typical enterprise SOA architecture).
 Besides VMs applications can talk to OpenStack services like Heat,
 Neutron, Trove and 3rd party services (DNS registration, NNTP,
 license activation service etc). Especially with the Heat so that
 application can have its VMs and other IaaS resources. There is a
 similar problem in Heat - you can express most of the basic things
 in HOT but once you need something really complex like accessing
 external API, custom load balancing or anything tricky you need to
 resort to Python and write custom resource plugin. And then you
 required to have root access to engine to install that plugin. This
 is not a solution for Murano as in Murano any user can upload
 application manifest at any time without affecting running system
 and without admin permissions.

 Now going back to TOSCA the problem with TOSCA workflows is they are
 not part of standard. There is no standardized way how BPEL would
 access TOSCA attributes and how 2 systems need to interact. This
  alone makes any 2 TOSCA implementations incompatible with each other,
  rendering the whole idea of a standard useless. It is not a standard if
  there is no compatibility.

We have been working on what we call a "plan portability API" that
describes what APIs a TOSCA container has to support so that portable flows
can access topology information. During the v1.0 time frame, though, we
focused on the declarative part (i.e. the topology model). But, yes I agree
that this part needs to be done so that also plans get portable. If you are
having experience in this area, it would be great to collaborate and see if
we can feed your input into the TOSCA effort.

 And again BPEL is heavy XML language that you don't want to have in
 OpenStack. Trust me, I spent significant time studying it. And if
 there is YAML version of TOSCA that is much more readable than XML
 one there is no such thing for BPEL. And I'm not aware of any
 adequate replacement for it

I agree that BPEL and BPMN are very heavy and hard to use without tooling,
so no objection on looking at a lightweight alternative in the OpenStack
orchestration context.

 3. It seems like nobody is really using TOSCA. The TOSCA standard defines
 exact TOSCA package format. TOSCA was designed so that people can
 share those packages (CSARs as TOSCA calls 

Re: [openstack-dev] Incubation Request: Murano

2014-03-05 Thread Thomas Spatzier
Forgot to provide the email addresses of Paul and Simon in my last mail:

paul.lip...@ca.com
smo...@de.ibm.com

Regards,
Thomas

 From: Thomas Spatzier/Germany/IBM@IBMDE
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 05/03/2014 10:21
 Subject: Re: [openstack-dev] Incubation Request: Murano

 Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 05/03/2014
 00:32:08:

  From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 05/03/2014 00:34
  Subject: Re: [openstack-dev] Incubation Request: Murano
 
  Hi Thomas, Zane,
 
  Thank you for bringing TOSCA to the discussion. I think this is
  important topic as it will help to find better alignment or even
  future merge of Murano DSL and Heat templates. Murano DSL uses YAML
  representation too, so we can easily merge use constructions from
  Heat and probably any other YAML based TOSCA formats.
 
  I will be glad to join TOSCA TC. Is there any formal process for that?

 The first part is that your company must be a member of OASIS. If that is
 the case, I think you can simply go to the TC page [1] and click a button
 to join the TC. If your company is not yet a member, you could get in touch
 with the TC chairs Paul Lipton and Simon Moser and ask for the best next
 steps. We recently had people from GigaSpaces join the TC, and since they
 are also doing very TOSCA aligned implementation in Cloudify, their input
 will probably help a lot to advance TOSCA.

 
  I also would like to use this opportunity and start conversation
  with Heat team about Heat roadmap and feature set. As Thomas
   mentioned in his previous e-mail, the TOSCA topology story is quite
   covered by HOT. At the same time there are entities like Plans which
   are covered by Murano. We had a discussion about bringing workflows to
   the Heat engine before the HK summit and it looks like the Heat team has no
  plans to bring workflows into Heat. That is actually why we
  mentioned Orchestration program as a potential place for Murano DSL
   as Heat+Murano together will cover everything which is defined by TOSCA.

 I remember the discussions about whether to bring workflows into Heat or
  not. My personal opinion is that workflows are probably out of the scope of
  Heat (i.e. everything but the derived orchestration flows the Heat engine
  implements). So there could well be a layer on top of Heat that lets Heat
  deal with all topology-related declarative business and adds workflow-based
  orchestration around it. TOSCA could be a way to describe the respective
 overarching models and then hand the different processing tasks to the
 right engine to deal with it.

 
  I think TOSCA initiative can be a great place to collaborate. I
  think it will be possible then to use Simplified TOSCA format for
  Application descriptions as TOSCA is intended to provide such
 descriptions.
 
  Is there a team who are driving TOSCA implementation in OpenStack
  community? I feel that such team is necessary.

 We started to implement a TOSCA YAML to HOT converter and our team member
 Sahdev (IRC spzala) has recently submitted code for a new stackforge
 project [2]. This is very initial, but could be a point to collaborate.

 [1] https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca
 [2] https://github.com/stackforge/heat-translator

 Regards,
 Thomas

 
  Thanks
  Georgy
 

  On Tue, Mar 4, 2014 at 2:36 PM, Thomas Spatzier
 thomas.spatz...@de.ibm.com
   wrote:
  Excerpt from Zane Bitter's message on 04/03/2014 23:16:21:
   From: Zane Bitter zbit...@redhat.com
   To: openstack-dev@lists.openstack.org
   Date: 04/03/2014 23:20
   Subject: Re: [openstack-dev] Incubation Request: Murano
  
   On 04/03/14 00:04, Georgy Okrokvertskhov wrote:
   
    It so happens that the OASIS's TOSCA technical committee are working as
    we speak on a TOSCA Simple Profile that will hopefully make things
    easier to use and includes a YAML representation (the latter is great
    IMHO, but the key to being able to do it is the former). Work is still
    at a relatively early stage and in my experience they are very much open
    to input from implementers.

   Nice, I was probably also writing a mail with this information at about the
   same time :-)
   And yes, we are very much interested in feedback from implementers and open
   to suggestions. If we can find gaps and fill them with good proposals, now
   is the right time.
 
  
   I would strongly encourage you to get involved in this effort (by
    joining the TOSCA TC), and also to architect Murano in such a way that
    it can accept input in multiple formats (this is something we are making
    good progress toward in Heat). Ideally the DSL format for Murano+Heat
    should be a trivial translation away from the relevant parts of the YAML
    representation of TOSCA Simple Profile.

  Right, having a straight

Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Thomas Spatzier
Hi all,

I would like to pick up the TOSCA topic brought up by Zane in his mail
below.

TOSCA is in fact a standard that seems closely aligned with the concepts
that Murano is implementing, so thanks Zane for bringing up this
discussion. I saw Georgy's reply earlier today where he stated that Murano is
actually heavily inspired by TOSCA, but Murano took a different path due to
some drawbacks in TOSCA v1.0 (e.g. XML).

I would like to point out, though, that we (TOSCA TC) are heavily working
on fixing some of the usability issues that TOSCA v1.0 has. The most
important point being that we are working on a YAML rendering, along with a
simplified profile of TOSCA, which both shall make it easier and more
attractive to use TOSCA. Much of this work has actually been inspired by
the collaboration with the Heat community and the development of the HOT
language.

That said, I would really like the Murano folks to have a look at a current
working draft of the TOSCA Simple Profile in YAML which can be found at
[1]. It would be nice to get some feedback, and ideally we could even
collaborate to see if we can come up with a common solution that fits
everyone's needs. As far as topologies are concerned, we are trying to get
TOSCA YAML and HOT well aligned so we can have an easy mapping. Sahdev from
our team (IRC spzala) is actually working on a TOSCA YAML to HOT converter
which he recently put on stackforge (initial version only). With Murano it
would be interesting to see if we could collaborate on the plans side of
TOSCA.

Apart from pure DSL work, I think Murano has some other interesting items
that are also interesting from a TOSCA perspective. For example, I read
about a catalog that stores artifacts needed for app deployments. TOSCA
also has the concept of artifacts, and we have a packaging format to
transport a model and its associated artifacts. So if at some point we
start thinking about importing such a TOSCA archive into a layer above
today's Heat, the question is if we could use e.g. the Murano catalog for
storing all content.

All that said, I see some good opportunities for collaboration and it would
be nice to find a common solution with good alignment between projects and
to avoid duplicate efforts.

BTW, Georgy, I am impressed how closely you looked at the TOSCA spec and
the charter :-)

[1]
https://www.oasis-open.org/committees/document.php?document_id=52381&wg_abbrev=tosca

Greetings,
Thomas

Zane Bitter zbit...@redhat.com wrote on 04/03/2014 03:33:01:

 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 04/03/2014 03:32
 Subject: Re: [openstack-dev] Incubation Request: Murano

 On 25/02/14 05:08, Thierry Carrez wrote:
  The second challenge is that we only started to explore the space of
  workload lifecycle management, with what looks like slightly overlapping
  solutions (Heat, Murano, Solum, and the openstack-compatible PaaS
  options out there), and it might be difficult, or too early, to pick a
  winning complementary set.

 I'd also like to add that there is already a codified OASIS standard
 (TOSCA) that covers application definition at what appears to be a
 similar level to Murano. Basically it's a more abstract version of what
 Heat does plus workflows for various parts of the lifecycle (e.g. backup).

 Heat and TOSCA folks have been working together since around the time of
 the Havana design summit with the aim of eventually getting a solution
 for launching TOSCA applications on OpenStack. Nothing is set in stone
 yet, but I would like to hear from the Murano folks how they are
 factoring compatibility with existing standards into their plans.

 cheers,
 Zane.





Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Thomas Spatzier
Excerpt from Zane Bitter's message on 04/03/2014 23:16:21:
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 04/03/2014 23:20
 Subject: Re: [openstack-dev] Incubation Request: Murano

 On 04/03/14 00:04, Georgy Okrokvertskhov wrote:
 
 It so happens that the OASIS's TOSCA technical committee are working as
 we speak on a TOSCA Simple Profile that will hopefully make things
 easier to use and includes a YAML representation (the latter is great
 IMHO, but the key to being able to do it is the former). Work is still
 at a relatively early stage and in my experience they are very much open
 to input from implementers.

Nice, I was probably also writing a mail with this information at about the
same time :-)
And yes, we are very much interested in feedback from implementers and open
to suggestions. If we can find gaps and fill them with good proposals, now
is the right time.


 I would strongly encourage you to get involved in this effort (by
 joining the TOSCA TC), and also to architect Murano in such a way that
 it can accept input in multiple formats (this is something we are making
 good progress toward in Heat). Ideally the DSL format for Murano+Heat
 should be a trivial translation away from the relevant parts of the YAML
 representation of TOSCA Simple Profile.

Right, having a straight-forward translation would be really desirable. The
way to get there can actually be two-fold: (1) any feedback we get from the
Murano folks on the TOSCA simple profile and YAML can help us to make TOSCA
capable of addressing the right use cases, and (2) on the other hand make
sure the implementation goes in a direction that is in line with what TOSCA
YAML will look like.


 cheers,
 Zane.






Re: [openstack-dev] [Heat] Need more sample HOT templates for users

2014-02-13 Thread Thomas Spatzier
Hi Qiming,

not sure if you have already seen it, but there is some documentation
available at the following locations. If you already know it, sorry for
dup ;-)

Entry to Heat documentation:
http://docs.openstack.org/developer/heat/

Template Guide with pointers to more details like documentation of all
resources:
http://docs.openstack.org/developer/heat/template_guide/index.html

HOT template guide:
http://docs.openstack.org/developer/heat/template_guide/hot_guide.html

HOT template spec:
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html
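
For a quick first impression, here is a minimal HOT template (the image and
flavor values are just illustrative and depend on the cloud):

```yaml
heat_template_version: 2013-05-23

description: Launch a single server.

parameters:
  key_name:
    type: string
    description: Name of an existing key pair

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: fedora-20
      flavor: m1.small
      key_name: { get_param: key_name }

outputs:
  server_ip:
    value: { get_attr: [my_server, first_address] }
```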

Regards,
Thomas

Qiming Teng teng...@linux.vnet.ibm.com wrote on 14/02/2014 06:55:56:

 From: Qiming Teng teng...@linux.vnet.ibm.com
 To: openstack-dev@lists.openstack.org
 Date: 14/02/2014 07:04
 Subject: [openstack-dev] [Heat] Need more sample HOT templates for users

 Hi,

   I have been recently trying to convince some co-workers and even some
   customers to try deploying and manipulating their applications using Heat.

   Here are some feedbacks I got from them, which could be noteworthy for
   the Heat team, hopefully.

   - No document can be found on how each Resource is supposed to be
  used. This is partly solved by adding the resource_schema API but it
 seems not yet exposed by heatclient thus the CLI.

  In addition to this, the resource schema itself may print only a simple
  help message in ONE sentence, which could be insufficient for users
  to gain a full understanding.

   - The current 'heat-templates' project provides quite some samples in
 the CFN format, but not so many in HOT format.  For early users,
 this means either they will get more accustomed to CFN templates, or
 they need to write HOT templates from scratch.

 Another suggestion is also related to Resource usage. Maybe more
 smaller HOT templates each focusing on teaching one or two resources
 would be helpful. There could be some complex samples as show cases
 as well.

  Some thoughts on documenting the Resources:

   - The doc can be inlined in the source file, where a developer
 provides the manual of a resource when it is developed. People won't
 forget to update it if the implementation is changed. A Resource can
 provide a 'describe' or 'usage' or 'help' method to be inherited and
 implemented by all resource types.

 One problem with this is that code mixed with long help text may be
 annoying for some developers.  Another problem is about
 internationalization.

   - Another option is to create a subdirectory in the doc directory,
 dedicated to resource usage. In addition to the API references, we
 also provide resource references (think of the AWS CFN online docs).

    Does this make sense?

 Regards,
   - Qiming

 -
 Qiming Teng, PhD.
 Research Staff Member
 IBM Research - China
 e-mail: teng...@cn.ibm.com







Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-11 Thread Thomas Spatzier
Hi Clint,

thanks for writing this down. This is a really interesting use case and
feature, also in relation to what was recently discussed on rolling
updates.

I have a couple of thoughts and questions:

1) The overall idea seems clear to me but I have problems understanding the
detailed flow and relation to template definitions and metadata. E.g. in
addition to the examples you gave in the linked etherpad, where would the
script (or whatever handles the update) sit?

2) I am not a big fan of CFN WaitConditions since they let too much
programming shine through in a template. So I wonder whether this could be
made more transparent to the template writer. The underlying mechanism
could still be the same, but maybe we could make the template look cleaner.
For example, what Steve Baker is doing for software orchestration also uses
the underlying mechanisms but does not expose WaitConditions in templates.

3) Has the issue of how to express update policies on the rolling updates
thread been resolved? I followed that thread but it seems like there has not
been a final decision. The reason I am bringing this up is because I think
this is related. You are suggesting to establish a new top-level section
'action_hooks' in a resource. Rendering this top-level in the resource is a
good thing IMO. However, since this is related to updates in a way (you
want to react to any kind of update event to the resource's state), I
wonder if those hooks could be attributes of an update policy. UpdatePolicy
in CFN is also a top-level section in a resource and they seem to provide a
default one like the following (I am writing this in snake case as we would
render it in HOT):

resources:
  autoscaling_group1:
    type: AWS::AutoScaling::AutoScalingGroup
    properties:
      # the properties ...
    update_policy:
      auto_scaling_rolling_update:
        min_instances_in_service: 1
        max_batch_size: 1
        pause_time: PT12M5S

(I took this from the CFN user guide).
I.e. an update policy is already a complex data structure, and we could
define additional types that include the resource hook definitions you
need. ... I don't fully understand the connection between 'actions' and
'path' in your etherpad example yet, so I cannot define a concrete example,
but I hope you get what I wanted to express.
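
Just to sketch the direction I mean (the 'action_hooks' policy type and its
keys are made up here; 'actions' and 'path' only echo the terms from your
etherpad, so this is not a concrete proposal):

```yaml
# Hypothetical: carrying resource action hooks inside update_policy.
# All key names below are invented for illustration.
resources:
  compute_node_1:
    type: OS::Nova::Server
    properties:
      # the properties ...
    update_policy:
      action_hooks:
        actions: [REBUILD, REBOOT]
        # in-instance handler to be notified before the action proceeds
        path: /var/lib/heat/hooks/pre-update
        timeout: PT10M
```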

4) What kind of additional metadata for the update events are you thinking
about? For example, in case this is done in an update case with a batch
size of > 1 (i.e. you update multiple members in a cluster at a time) -
unless I put too much interpretation in here concerning the relation to
rolling updates - you would probably want to tell the server a blacklist
of servers to which it should not migrate workload, because they will be
taken down as well.


As I said, just a couple of thoughts, and maybe for some I am just
mis-understanding some details.
Anyway, I would be interested in your view.

Regards,
Thomas


Clint Byrum cl...@fewbar.com wrote on 11/02/2014 06:22:54:

 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Date: 11/02/2014 06:30
 Subject: [openstack-dev] [Heat] in-instance update hooks

 Hi, so in the previous thread about rolling updates it became clear that
 having in-instance control over updates is a more fundamental idea than
 I had previously believed. During an update, Heat does things to servers
 that may interrupt the server's purpose, and that may cause it to fail
 subsequent things in the graph.

 Specifically, in TripleO we have compute nodes that we are managing.
 Before rebooting a machine, we want to have a chance to live-migrate
 workloads if possible, or evacuate in the simpler case, before the node
 is rebooted. Also in the case of a Galera DB where we may even be running
 degraded, we want to ensure that we have quorum before proceeding.

 I've filed a blueprint for this functionality:

 https://blueprints.launchpad.net/heat/+spec/update-hooks

 I've cobbled together a spec here, and I would very much welcome
 edits/comments/etc:

 https://etherpad.openstack.org/p/heat-update-hooks

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-04 Thread Thomas Spatzier
Thomas Herve thomas.he...@enovance.com wrote on 03/02/2014 21:46:05:
 From: Thomas Herve thomas.he...@enovance.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 03/02/2014 21:52
 Subject: Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec
 re-written. RFC

  So, I wrote the original rolling updates spec about a year ago, and the
  time has come to get serious about implementation. I went through it
and
  basically rewrote the entire thing to reflect the knowledge I have
  gained from a year of working with Heat.
 
  Any and all comments are welcome. I intend to start implementation very
  soon, as this is an important component of the HA story for TripleO:
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates

 Hi Clint, thanks for pushing this.

 First, I don't think RollingUpdatePattern and CanaryUpdatePattern
 should be 2 different entities. The second just looks like a
 parametrization of the first (growth_factor=1?).

 I then feel that using (abusing?) depends_on for update pattern is a
 bit weird. Maybe I'm influenced by the CFN design, but the separate
 UpdatePolicy attribute feels better (although I would probably use a
 property). I guess my main question is around the meaning of using
 the update pattern on a server instance. I think I see what you want
 to do for the group, where child_updating would return a number, but
 I have no idea what it means for a single resource. Could you detail
 the operation a bit more in the document?

I also think that the depends_on feels a bit weird. In most use cases
depends_on is more about waiting for some other resource to be ready, but
for rolling updates the resource is more of a data container (a policy)
that is just there - that's at least how I understand it from a user's
perspective. So referring to that resource via a special property would look
more intuitive to me.

That would also be in line with other cases already implemented: an
InstanceGroup that points to its LaunchConfiguration; a SoftwareDeployment
that points to a SoftwareConfiguration.
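To illustrate the property-based reference argued for here, a sketch along the lines of the blueprint's RollingUpdatePattern could look like the following (the resource type and property names are hypothetical, not an implemented Heat interface):

```yaml
resources:
  web_update_pattern:
    # hypothetical resource type that is just a data container (a policy)
    type: OS::Heat::RollingUpdatePattern
    properties:
      min_in_service: 1
      max_batch_size: 2

  web_group:
    type: OS::Heat::InstanceGroup
    properties:
      # reference via a property, analogous to launch_configuration,
      # instead of abusing depends_on
      update_pattern: {get_resource: web_update_pattern}
      # ... other InstanceGroup properties ...
```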


 It also seems that the interface you're creating (child_creating/
 child_updating) is fairly specific to your use case. For autoscaling
 we have a need for more generic notification system, it would be
 nice to find common grounds. Maybe we can invert the relationship?
 Add a notified_resources attribute, which would call hooks on the
 parent when actions are happening.

 Thanks,

 --
 Thomas






Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Thomas Spatzier
Hi Thomas,

I haven't looked at the details of the autoscaling design for a while, but
the first option looks more intuitive to me.
It seems to cover the same content as LaunchConfiguration, but it is
generic and therefore would provide one common approach for all kinds
of resources.

Regards,
Thomas

Thomas Herve thomas.he...@enovance.com wrote on 30/01/2014 12:01:38:
 From: Thomas Herve thomas.he...@enovance.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 30/01/2014 12:06
 Subject: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

 Hi all,

 While talking to Zane yesterday, he raised an interesting question
 about whether or not we want to keep a LaunchConfiguration object
 for the native autoscaling resources.

 The LaunchConfiguration object basically holds properties to be able
 to fire new servers in a scaling group. In the new design, we will
 be able to start arbitrary resources, so we can't keep a strict
 LaunchConfiguration object as it exists, as we can have arbitrary
properties.

 It may be still be interesting to store it separately to be able to
 reuse it between groups.

 So either we do this:

 group:
   type: OS::Heat::ScalingGroup
   properties:
     scaled_resource: OS::Nova::Server
     resource_properties:
       image: my_image
       flavor: m1.large

 Or:

 group:
   type: OS::Heat::ScalingGroup
   properties:
     scaled_resource: OS::Nova::Server
     launch_configuration: server_config

 server_config:
   type: OS::Heat::LaunchConfiguration
   properties:
     image: my_image
     flavor: m1.large

 (Not sure we can actually define dynamic properties, in which case
 it'd be behind a top property.)

 Thoughts?

 --
 Thomas






Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Thomas Spatzier
Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:
 From: Tim Schnell tim.schn...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 27.11.2013 00:47
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap



snip

 That is not the use case that I'm attempting to make, let me try again.
 For what it's worth I agree, that in this use case I want a mechanism to
 tag particular versions of templates your solution makes sense and will
 probably be necessary as the requirements for the template catalog start
 to become defined.

 What I am attempting to explain is actually much simpler than that. There
 are 2 times in a UI that I would be interested in the keywords of the
 template. When I am initially browsing the catalog to create a new stack,
 I expect the stacks to be searchable and/or organized by these keywords
 AND when I am viewing the stack-list page I should be able to sort my
 existing stacks by keywords.

 In this second case I am suggesting that the end-user, not the Template
 Catalog Moderator should have control over the keywords that are defined
 in his instantiated stack. So if he does a Stack Update, he is not
 committing a new version of the template back to a git repository, he is
 just updating the definition of the stack. If the user decides that the
 original template defined the keyword as wordpress and he wants to
 revise the keyword to tims wordpress then he can do that without the
 original template moderator knowing or caring about it.

 This could be useful to an end-user whose business is managing client
 stacks on one account, maybe. So he could have tims wordpress, tims
 drupal, angus wordpress, angus drupal; the way that he updates the
 keywords after the stack has been instantiated is up to him. Then he can
 sort or search his stacks on his own custom keywords.


For me this all sounds like really user-specific tagging, so something that
should really be done outside the template file itself, in the template
catalog service. The use case seems to be about a role that organizes templates
(or later stacks) by some means, which is fine, but then everything is a
decision of the person organizing the templates, and not necessarily a
decision of the template author. So it would be cleaner to keep this
tagging outside the template IMO.

 I agree that the template versioning is a separate use case.

 Tim
 
 
 
 







Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Thomas Spatzier
Excerpts from Steve Baker's message on 26.11.2013 23:29:06:

 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 26.11.2013 23:32
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap

 On 11/27/2013 11:02 AM, Christopher Armstrong wrote:
 On Tue, Nov 26, 2013 at 3:24 PM, Tim Schnell tim.schn...@rackspace.com
  wrote:

snip

 Good point, I'd like to revise my previous parameter-groups example:
 parameter-groups:
 - name: db
   description: Database configuration options
   parameters: [db_name, db_username, db_password]
 - name: web_node
   description: Web server configuration
   parameters: [web_node_name, keypair]
 parameters:
   # as above, but without requiring any order or group attributes

+1 on this approach. Very clean, and it gives you all the information you would
need for the UI use case.
And as you say, it does not have any impact on the current way parameters are
processed by Heat.


 Here, parameter-groups is a list which implies the order, and
 parameters are specified in the group as a list, so we get the order
 from that too. This means a new parameter-groups section which
 contains anything required to build a good parameters form, and no
 modifications required to the parameters section at all.





Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Thomas Spatzier
Zane Bitter zbit...@redhat.com wrote on 27.11.2013 22:56:16:

 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 27.11.2013 23:00
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap

 On 27/11/13 19:57, Jay Pipes wrote:
 
  Actually, even simpler than that...
 
  parameters:
 db:
  - db_name:
description: blah
help: blah
  - db_username:
description: blah
help: blah
 
  After all, can't we assume that if the parameter value is a list, then
  it is a group of parameters?

 This resolves to a fairly weird construct:

 {
parameters: {
  db: [
{
  db_name: null,
  description: blah,
  help: blah
},
{
  description: blah,
  db_username: null,
  help: blah
}
  ]
}
 }

 so the name is whichever key has a data value of null (what if there's
 more than one?), and obviously it can't collide with any keywords like
 description or help.

 Also, this orders parameters within a group (though not parameters that
 are ungrouped) but not the groups themselves.

Agree that this is kind of a strange construct. I would be more in favor of
(1) making parameters a list, which brings implicit ordering (so it is intuitive
to the user), and (2) adding the grouping as a separate thing.

E.g.

parameters:
  - db_user:
      type: string
      description: DB user name.
  - db_password:
      type: string
      description: DB password.
  - app_user:
      type: string
      description: Application user name.
  - app_password:
      type: string
      description: Application password.

parameter_groups:
  - db: [ db_user, db_password ]
    description: Database parameters.
  - app: [ app_user, app_password ]
    description: Application parameters.



 cheers,
 Zane.






Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-21 Thread Thomas Spatzier
Steve Baker sba...@redhat.com wrote on 21.11.2013 21:19:07:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 21.11.2013 21:25
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/21/2013 08:48 PM, Thomas Spatzier wrote:
  Excerpts from Steve Baker's message on 21.11.2013 00:00:47:
  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 21.11.2013 00:04
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
snip
  I thought about the name SoftwareApplier some more and while it is
clear
  what it does (it applies a software config to a server), the naming is
not
  really consistent with all the other resources in Heat. Every other
  resource type is called after the thing that you get when the template
gets
  instantiated (a Server, a FloatingIP, a VolumeAttachment etc). In
  case of SoftwareApplier what you actually get from a user perspective
is a
  deployed instance of the piece of software described by a
 SoftwareConfig.
  Therefore, I was calling it SoftwareDeployment originally, because you
get a
  software deployment (according to a config). Any comments on that name?
 SoftwareDeployment is a better name, apart from those 3 extra letters.
 I'll rename my POC.  Sorry nannj, you'll need to rename them back ;)

Ok, I'll change the name back in the wiki :-)


  If we think this thru with respect to remove-config (even though this
  needs more thought), a SoftwareApplier (that thing itself) would not
really
  go to state DELETE_IN_PROGRESS during an update. It is always there on
the
  VM but the software it deploys gets deleted and then reapplied or
  whatever ...
 
  Now thinking more about update scenarios (which we can leave for an
  iteration after the initial deployment is working), in my mental model
it
  would be more consistent to have information for handle_create,
  handle_delete, handle_update kinds of events all defined in the
  SoftwareConfig resource. A SoftwareConfig would represent configuration
  information for one specific piece of software, e.g. a web server. So
it
  could provide all the information you need to install it, to uninstall
it,
  or to update its config. By updating the SoftwareApplier's (or
  SoftwareDeployment's - my preferred name) state at runtime, the
in-instance
  tools would grab the respective script or whatever and run it.
 
  So SoftwareConfig could look like:
 
  resources:
    my_webserver_config:
      type: OS::Heat::SoftwareConfig
      properties:
        http_port:
          type: number
        # some more config props

        config_create: http://www.example.com/my_scripts/webserver/install.sh
        config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
        config_update: http://www.example.com/my_scripts/webserver/applyupdate.sh
 
 
  At runtime, when a SoftwareApplier gets created, it looks for the
  'config_create' hook and triggers that automation. When it gets
deleted, it
  looks for the 'config_delete' hook and so on. Only config_create is
  mandatory.
  I think that would also give us nice extensibility for future use
cases.
  For example, Heat today does not support something like stop-stack or
  start-stack which would be pretty useful though. If we have it one day,
we
  would just add a 'config_start' hook to the SoftwareConfig.
 
 
  [1]
 
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
  [2] https://blueprints.launchpad.net/heat/+spec/hot-software-config
 
 With the caveat that what we're discussing here is a future
enhancement...

 The problem I see with config_create/config_update/config_delete in a
 single SoftwareConfig is that we probably can't assume these 3 scripts
 consume the same inputs and produce the same outputs.

We could make it a convention that creators of software configs have to use
the same signature for the automation of create, delete, etc. Or at least
input param names must be the same, while some pieces might take only a
subset. E.g. delete will probably take fewer inputs. This way we could have a
self-contained config.
As you said above, implementation-wise this is probably a future
enhancement, so once we have the config_create handling in place we could
just do a PoC patch on top and try it out.
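A sketch of that shared-signature convention, using the config_create/config_delete hooks discussed in this thread (hook and input names are illustrative, not an implemented interface):

```yaml
my_webserver_config:
  type: OS::Heat::SoftwareConfig
  properties:
    # one shared input signature for all lifecycle hooks
    http_port:
      type: number
    doc_root:
      type: string
    # create consumes all inputs; delete may only need a subset (e.g. doc_root)
    config_create: http://www.example.com/my_scripts/webserver/install.sh
    config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
```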


 Another option might be to have a separate confg/deployment pair for
 delete workloads, and a property on the deployment resource which states
 which phase the workload is executed in (create or delete).

Yes, this would be an option, but IMO a bit confusing for users. Especially
when I inspect a deployed stack, I would be wondering why there are many
SoftwareDeployment resources hanging around for the same piece of software
installed on a server.


 I'd like to think that special treatment for config_update won't be
 needed at all, since CM tools are supposed to be good at converging to
 whatever you

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Thomas Spatzier
Steve Baker sba...@redhat.com wrote on 20.11.2013 09:51:34:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 20.11.2013 09:55
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/20/2013 09:29 PM, Clint Byrum wrote:
  Excerpts from Thomas Spatzier's message of 2013-11-19 23:35:40 -0800:
  Excerpts from Steve Baker's message on 19.11.2013 21:40:54:
  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 19.11.2013 21:43
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
 
  snip
  I think there needs to a CM tool specific agent delivered to the
server
  which os-collect-config invokes. This agent will transform the config
  data (input values, CM script, CM specific specialness) to a CM tool
  invocation.
 
  How to define and deliver this agent is the challenge. Some options
are:
  1) install it as part of the image customization/bootstrapping
(golden
  images or cloud-init)
  2) define a (mustache?) template in the SoftwareConfig which
  os-collect-config transforms into the agent script, which
  os-collect-config then executes
  3) a CM tool specific implementation of SoftwareApplier builds and
  delivers a complete agent to os-collect-config which executes it
 
  I may be leaning towards 3) at the moment. Hopefully any agent can be
  generated with a sufficiently sophisticated base SoftwareApplier
type,
  plus maybe some richer intrinsic functions.
  This is a good summary of options; about the same we had in mind. And we
were
  also leaning towards 3. Probably the approach we would take is to get
a
  SoftwareApplier running for one CM tool (e.g. Chef), then look at
another
   tool (base shell scripts), and then see what the generic parts are
that can
  be factored into a base class.
 
  The POC I'm working on is actually backed by a REST API which does
  dumb
  (but structured) storage of SoftwareConfig and SoftwareApplier
  entities.
  This has some interesting implications for managing SoftwareConfig
  resources outside the context of the stack which uses them, but
lets
  not
  worry too much about that *yet*.
  Sounds good. We are also defining some blueprints to break down the
  overall
  software config topic. We plan to share them later this week, and
then
  we
  can consolidate with your plans and see how we can best join forces.
 
 
  At this point it would be very helpful to spec out how specific CM
tools
  are invoked with given inputs, script, and CM tool specific options.
  That's our plan; and we would probably start with scripts and chef.
 
  Maybe if you start with shell scripts, cfn-init and chef then we can
all
  contribute other CM tools like os-config-applier, puppet, ansible,
  saltstack.
 
  Hopefully by then my POC will at least be able to create resources,
if
  not deliver some data to servers.
  We've been thinking about getting metadata to the in-instance parts on
the
  server and whether the resources you are building can serve the
purpose.
  I.e. pass and endpoint to the SoftwareConfig resources to the instance
and
  let the instance query the metadata from the resource. Sounds like
this is
  what you had in mind, so that would be a good point for integrating
the
  work. In the meantime, we can think of some shortcuts.
 
  Note that os-collect-config is intended to be a light-weight generic
  in-instance agent to do exactly this. Watch for Metadata changes, and
  feed them to an underlying tool in a predictable interface. I'd hope
  that any of the appliers would mostly just configure os-collect-config
  to run a wrapper that speaks os-collect-config's interface.
 
  The interface is defined in the README:
 
  https://pypi.python.org/pypi/os-collect-config
 
  It is inevitable that we will extend os-collect-config to be able to
  collect config data from whatever API these config applier resources
  make available. I would suggest then that we not all go off and
reinvent
  os-collect-config for each applier, but rather enhance
os-collect-config
  as needed and write wrappers for the other config tools which implement
  its interface.
 
  os-apply-config already understands this interface for obvious reasons.
 
  Bash scripts can use os-apply-config to extract individual values, as
  you might see in some of the os-refresh-config scripts that are run as
  part of tripleo. I don't think anything further is really needed there.
 
  For chef, some kind of ohai plugin to read os-collect-config's
collected
  data would make sense.

Thanks for all that information, Clint. Fully agree that we should leverage
what is already there instead of re-inventing the wheel.

 
 I'd definitely start with occ as Clint outlines. It would be nice if occ
 only had to be configured to poll metadata for the OS::Nova::Server to
 fetch the aggregated data for the currently available SoftwareAppliers.

Yep, sounds like a plan.

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Thomas Spatzier
Excerpts from Steve Baker's message on 21.11.2013 00:00:47:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 21.11.2013 00:04
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/21/2013 11:41 AM, Clint Byrum wrote:
  Excerpts from Mike Spreitzer's message of 2013-11-20 13:46:25 -0800:
  Clint Byrum cl...@fewbar.com wrote on 11/19/2013 04:28:31 PM:

snip

 
  I am worried about the explosion of possibilities that comes from
trying
  to deal with all of the diff's possible inside an instance. If there is
an
  actual REST interface for a thing, then yes, let's use that. For
instance,
  if we are using docker, there is in fact a very straight forward way to
  say remove entity X. If we are using packages we have the same thing.
  However, if we are just trying to write chef configurations, we have to
  write reverse chef configurations.
 
  What I meant to convey is let's give this piece of the interface a lot
of
  thought. Not this is wrong to even have. Given a couple of days now,
  I think we do need apply and remove. We should also provide really
  solid example templates for this concept.
 You're right, I'm already starting to see issues with my current
approach.

 This smells like a new blueprint. I'll remove it from the scope of the
 current software config work and raise a blueprint to track
remove-config.

So I read thru those recent discussions and in parallel also started to
update the design wiki. BTW, nanjj renamed the wiki to [1] (but also made a
redirect from the previous ...-WIP page) and linked it as spec to BP [2].

I'll leave out the remove-config thing for now. While thinking about the
overall picture, I came up with some other comments:

I thought about the name SoftwareApplier some more and while it is clear
what it does (it applies a software config to a server), the naming is not
really consistent with all the other resources in Heat. Every other
resource type is called after the thing that you get when the template gets
instantiated (a Server, a FloatingIP, a VolumeAttachment etc). In
case of SoftwareApplier what you actually get from a user perspective is a
deployed instance of the piece of software described by a SoftwareConfig.
Therefore, I was calling it SoftwareDeployment originally, because you get a
software deployment (according to a config). Any comments on that name?

If we think this thru with respect to remove-config (even though this
needs more thought), a SoftwareApplier (that thing itself) would not really
go to state DELETE_IN_PROGRESS during an update. It is always there on the
VM but the software it deploys gets deleted and then reapplied or
whatever ...

Now thinking more about update scenarios (which we can leave for an
iteration after the initial deployment is working), in my mental model it
would be more consistent to have information for handle_create,
handle_delete, handle_update kinds of events all defined in the
SoftwareConfig resource. A SoftwareConfig would represent configuration
information for one specific piece of software, e.g. a web server. So it
could provide all the information you need to install it, to uninstall it,
or to update its config. By updating the SoftwareApplier's (or
SoftwareDeployment's - my preferred name) state at runtime, the in-instance
tools would grab the respective script or whatever and run it.

So SoftwareConfig could look like:

resources:
  my_webserver_config:
    type: OS::Heat::SoftwareConfig
    properties:
      http_port:
        type: number
      # some more config props

      config_create: http://www.example.com/my_scripts/webserver/install.sh
      config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
      config_update: http://www.example.com/my_scripts/webserver/applyupdate.sh


At runtime, when a SoftwareApplier gets created, it looks for the
'config_create' hook and triggers that automation. When it gets deleted, it
looks for the 'config_delete' hook and so on. Only config_create is
mandatory.
I think that would also give us nice extensibility for future use cases.
For example, Heat today does not support something like stop-stack or
start-stack which would be pretty useful though. If we have it one day, we
would just add a 'config_start' hook to the SoftwareConfig.
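Following that pattern, the extensibility idea could be sketched roughly like this (the config_start/config_stop hook names are purely hypothetical, for a stack-start/stop feature Heat does not have today):

```yaml
my_webserver_config:
  type: OS::Heat::SoftwareConfig
  properties:
    config_create: http://www.example.com/my_scripts/webserver/install.sh
    # hypothetical hooks for a future stack-start / stack-stop:
    config_start: http://www.example.com/my_scripts/webserver/start.sh
    config_stop: http://www.example.com/my_scripts/webserver/stop.sh
```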


[1]
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
[2] https://blueprints.launchpad.net/heat/+spec/hot-software-config

  ...
  A specific use-case I'm trying to address here is tripleo doing an
  update-replace on a nova compute node. The remove_config contains
the
  workload to evacuate VMs and signal heat when the node is ready to
be
  shut down. This is more involved than just uninstall the things.
 
  Could you outline in some more detail how you think this could be
  done?
  So for that we would not remove the software configuration for the
  nova-compute, we would assert that the machine needs vms 

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-18 Thread Thomas Spatzier
Hi all,

I have reworked the wiki page [1] I created last week to reflect
discussions we had on the mail list and in IRC. From ML discussions last
week it looked like we were all basically on the same page (with some
details to be worked out), and I hope the new draft eliminates some
confusion that the original draft had.

[1] https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP

Regards,
Thomas




Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-18 Thread Thomas Spatzier
Steve Baker sba...@redhat.com wrote on 18.11.2013 21:52:04:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 18.11.2013 21:54
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/19/2013 02:22 AM, Thomas Spatzier wrote:
  Hi all,
 
  I have reworked the wiki page [1] I created last week to reflect
  discussions we had on the mail list and in IRC. From ML discussions
last
  week it looked like we were all basically on the same page (with some
  details to be worked out), and I hope the new draft eliminates some
  confusion that the original draft had.
 
  [1]
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
 Thanks Thomas, this looks really good. I've actually started on a POC
 which maps to this model.

Good to hear that, Steve :-)
Now that we are converging, should we consolidate the various wiki pages
and just have one? E.g. copy the complete contents of
hot-software-config-WIP to your original hot-software-config, or deprecate
all others and make hot-software-config-WIP the master?


 I've used different semantics which you may actually prefer some of,
 please comment below.

 Resource types:
 SoftwareConfig - SoftwareConfig (yay!)
 SoftwareDeployment - SoftwareApplier - less typing, less mouth-fatigue

I'm ok with SoftwareApplier. If we don't hear objections, I can change it
in the wiki.


 SoftwareConfig properties:
 parameters - inputs - just because parameters is overloaded already.

Makes sense.

 Although if the CM tool has their own semantics for inputs then that
 should be used in that SoftwareConfig resource implementation instead.
 outputs - outputs

 SoftwareApplier properties:
 software_config - apply_config - because there will sometimes be a
 corresponding remove_config

Makes sense, and the remove_config thought is a very good point!

 server - server
 parameters - input_values - to match the 'inputs' schema property in
 SoftwareConfig

Agree on input_values.


 Other comments on hot-software-config-WIP:

 Regarding apply_config/remove_config, if a SoftwareApplier resource is
 deleted it should trigger any remove_config and wait for the server to
 acknowledge when that is complete. This allows for any
 evacuation/deregistering workloads to be executed.

 I'm unclear yet what the SoftwareConfig 'role' is for, unless the role
 specifies the contract for a given inputs and outputs schema? How would
 this be documented or enforced? I'm inclined to leave it out for now.

So about 'role', as I stated in the wiki, my thinking was that there will
be different SoftwareConfig and SoftwareApplier implementations per CM tool
(more on that below), since all CM tools will probably have their specific
metadata and runtime implementation. So in my example I was using Chef, and
'role' is just a Chef concept, i.e. you take a cookbook and configure a
specific Chef role on a server.


 It should be possible to write a SoftwareConfig type for a new CM tool
 as a provider template. This has some nice implications for deployers
 and users.

I think provider templates are a good thing for clean componentization
and re-use. However, I think it would still be good to allow users to
define their SoftwareConfigs inline in a template for simple use cases. I
heard that requirement in several posts on the ML last week.
The question is whether we can live with a single implementation of
SoftwareConfig and SoftwareApplier then (see also below).


 My hope is that there will not need to be a different SoftwareApplier
 type written for each CM tool. But maybe there will be one for each
 delivery mechanism. The first implementation will use metadata polling
 and signals, another might use Marconi. Bootstrapping an image to
 consume a given CM tool and applied configuration data is something that
 we need to do, but we can make it beyond the scope of this particular
 proposal.

I was thinking about a single implementation, too. However, I cannot really
imagine how a single implementation could handle both the different
metadata of different CM tools and the different runtime implementations. I
think we would want to support at least a handful of the most popular
tools, but I cannot see at the moment how to cover them all in one
implementation. My thought was that there could be a super-class for common
behavior, and then plugins with specific behavior for each tool.

Anyway, all of that needs to be verified, so working on PoC patches is
definitely the right thing to do. For example, if we work on implementations
for two CM tools (e.g. Chef and simple scripts), we can probably see whether
one common implementation is possible or not.
Someone from our team is going to write a provider for Chef to try things
out. I think that can be aligned nicely with your work.


 The POC I'm working on is actually backed by a REST API which does dumb
 (but structured) storage of SoftwareConfig and SoftwareApplier entities

Re: [openstack-dev] [Heat] Continue discussing multi-region orchestration

2013-11-14 Thread Thomas Spatzier
Hi Bartosz,

one thing is still not clear to me:
The discussion at the summit and your updated architecture on the wiki
talk about two Heat engines, one in each region, with only the dashboard
on top. So what entity will actually do the cross-region orchestration?
In the discussions I heard people talking about the heat client doing it.
But wouldn't that duplicate functionality from the engine into the client,
i.e. the client would then become an orchestrator itself?

Regards,
Thomas

Bartosz Górski bartosz.gor...@ntti3.com wrote on 14.11.2013 15:58:39:
 From: Bartosz Górski bartosz.gor...@ntti3.com
 To: openstack-dev@lists.openstack.org,
 Date: 14.11.2013 16:05
 Subject: [openstack-dev] [Heat] Continue discussing multi-region
orchestration

 Hi all,

 At summit in Hong Kong we had a design session where we discussed adding
 multi-region orchestration support to Heat. During the session we had
 really heated discussion and spent most of the time on explaining the
 problem. I think it was really good starting point and right now more
 people have better understanding for this problem. I appreciate all the
 suggestions and concerns I got from you. I would like to continue this
 discussion here on the mailing list.

 I updated the etherpad after the session. If I forgot about something or
 wrote something that is not right, please feel free to tell me about it.

 References:
 [1] Blueprint:

https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat

 [2] Etherpad:
 https://etherpad.openstack.org/p/icehouse-summit-heat-multi-region-cloud
 [3] Patch with POC version: https://review.openstack.org/#/c/53313/


 Best,
 Bartosz Górski
 NTTi3


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Excerpts form Clint Byrum's message on 12.11.2013 19:32:50:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 12.11.2013 19:35
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
 
  Hi all,
 
  I have just posted the following wiki page to reflect a refined
proposal
snip
 Hi Thomas, thanks for spelling this out clearly.

 I am still -1 on anything that specifies the place a configuration is
 hosted inside the configuration definition itself. Because configurations
 are encapsulated by servers, it makes more sense to me that the servers
 (or server groups) would specify their configurations. If changing to a

IMO the current proposal does _not_ hardcode the concrete hosting inside the
component definition. The component definition lives in an external template
file, and all we do is give it a pointer to the server at deploy time so that
the implementation can perform whatever is needed at that time.
The resource in the actual template file is like the intermediate
association resource you are suggesting below (similar to what
VolumeAttachment does), so this is the place where you say which component
gets deployed where. It represents a concrete use of a software
component. Again, all we do is pass in a pointer to the server where _this
use_ of the software component shall be installed.
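To illustrate the re-use point, the same externally defined component could be used twice in one template, with the target server bound only in each association resource. Resource and type names below are made up for illustration:

```yaml
resources:
  server_a:
    type: OS::Nova::Server
    properties: {image: webserver, flavor: 100}

  server_b:
    type: OS::Nova::Server
    properties: {image: webserver, flavor: 100}

  # two uses of the same externally defined component template;
  # the target server is only bound here, at deploy time
  mysql_on_a:
    type: My::Software::MySQL     # hypothetical provider template
    properties:
      server: {get_resource: server_a}

  mysql_on_b:
    type: My::Software::MySQL
    properties:
      server: {get_resource: server_b}
```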

 more logical model is just too hard for TOSCA to adapt to, then I suggest
 this be an area that TOSCA differs from Heat. We don't need two models

The current proposal was done completely unrelated to TOSCA; it was really
just an attempt at a pragmatic approach to solving the use cases we talked
about. I don't really care in which direction the relations point. Both
ways can be easily mapped to TOSCA. I just think the current proposal is
intuitive, at least to me. And you could see it as a kind of short notation
that avoids another association class.

 for communicating configurations to servers, and I'd prefer Heat stay
 focused on making HOT template authors' and users' lives better.

 I have seen an alternative approach which separates a configuration
 definition from a configuration deployer. This at least makes it clear
 that the configuration is a part of a server. In pseudo-HOT:

 resources:
   WebConfig:
 type: OS::Heat::ChefCookbook
 properties:
   cookbook_url: https://some.test/foo
   parameters:
 endpoint_host:
   type: string
   WebServer:
 type: OS::Nova::Server
 properties:
   image: webserver
   flavor: 100
   DeployWebConfig:
 type: OS::Heat::ConfigDeployer
 properties:
   configuration: {get_resource: WebConfig}
   on_server: {get_resource: WebServer}
   parameters:
 endpoint_host: {get_attribute: [ WebServer, first_ip]}

The DeployWebConfig association class is actually the 'mysql' resource in
the template on the wiki page. See the Design alternatives section where I
put it. That would be fine with me as well.


snip




Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 12.11.2013
21:27:13:
 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 12.11.2013 21:29
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 Hi,

 I agree with Clint that component placement specified inside
 component configuration is not a right thing. I remember that mostly
 everyone agreed that hosted_on should not be in HOT templates.
 When one specify placement explicitly inside  a component definition
 it prevents the following:
 1. Reusability - you can't reuse component without creating its
 definition copy with another placement parameter.

See my reply to Clint's mail. The deployment location in the form of the
server reference is _not_ hardcoded in the component definition. All we do
is provide a pointer to the server where a software component shall be
deployed at deploy time. You can use a component definition in many places,
and in each place where you use it you provide a pointer to the target
server.

 2. Composability - it will be no clear way to express composable
 configurations. There was a clear way in a template showed during
 design session where server had a list of components to be placed.

I think we have full composability with the deployment resources that
mark uses of software component definitions.

 3. Deployment order - some components should be placed in strict
 order and it will be much easier just make an ordered list of
 components then express artificial dependencies between them just
 for ordering.

With the deployment resources and Heat's normal way of handling
dependencies between resources, we should be able to get proper ordering.
I agree that strict ordering is probably the easiest way of doing it, but
we have implementations that do deployment in a more flexible manner
without any problems.
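A sketch of how such ordering could fall out of Heat's dependency handling — both implicitly via attribute references and explicitly via depends_on. Resource type names are hypothetical, and the exact depends_on syntax for HOT was still under discussion at this point:

```yaml
resources:
  install_db:
    type: My::Software::Database      # hypothetical deployment resource
    properties:
      server: {get_resource: db_server}

  install_app:
    type: My::Software::AppServer     # hypothetical deployment resource
    properties:
      server: {get_resource: app_server}
      params:
        # implicit dependency: referencing the database deployment's
        # attribute already forces install_db to complete first
        db_endpoint: {get_attr: [install_db, endpoint]}
    # explicit dependency, for cases with no data flow between the two
    depends_on: install_db
```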


 Thanks
 Georgy


 On Tue, Nov 12, 2013 at 10:32 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
 
  Hi all,
 
  I have just posted the following wiki page to reflect a refined
proposal
  for HOT software configuration based on discussions at the design
summit
  last week. Angus also put a sample up in an etherpad last week, but we
did
  not have enough time to go thru it in the design session. My write-up
is
  based on Angus' sample, actually a refinement, and on discussions we
had in
  breaks, plus it is trying to reflect all the good input from ML
discussions
  and Steve Baker's initial proposal.
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
  Please review and provide feedback.

 Hi Thomas, thanks for spelling this out clearly.

 I am still -1 on anything that specifies the place a configuration is
 hosted inside the configuration definition itself. Because configurations
 are encapsulated by servers, it makes more sense to me that the servers
 (or server groups) would specify their configurations. If changing to a
 more logical model is just too hard for TOSCA to adapt to, then I suggest
 this be an area that TOSCA differs from Heat. We don't need two models
 for communicating configurations to servers, and I'd prefer Heat stay
 focused on making HOT template authors' and users' lives better.

 I have seen an alternative approach which separates a configuration
 definition from a configuration deployer. This at least makes it clear
 that the configuration is a part of a server. In pseudo-HOT:

 resources:
   WebConfig:
     type: OS::Heat::ChefCookbook
     properties:
       cookbook_url: https://some.test/foo
       parameters:
         endpoint_host:
           type: string
   WebServer:
     type: OS::Nova::Server
     properties:
       image: webserver
       flavor: 100
   DeployWebConfig:
     type: OS::Heat::ConfigDeployer
     properties:
       configuration: {get_resource: WebConfig}
       on_server: {get_resource: WebServer}
       parameters:
         endpoint_host: {get_attribute: [ WebServer, first_ip]}

 I have implementation questions about both of these approaches though,
 as it appears they'd have to reach backward in the graph to insert
 their configuration, or have a generic bucket for all configuration
 to be inserted. IMO that would look a lot like the method I proposed,
 which was to just have a list of components attached directly to the
 server like this:

 components:
   WebConfig:
     type: Chef::Cookbook
     properties:
       cookbook_url: https://some.test/foo
       parameters:
         endpoing_host:
           type: string
 resources:
   WebServer:
     type: OS::Nova::Server
     properties:
       image: webserver
       flavor: 100
     components:
       - webconfig:
         component: {get_component: WebConfig}
         parameters:
           endpoint_host: {get_attribute: [ WebServer, first_ip ]}

 Of course, 

Re: [openstack-dev] [Mistral] really simple workflow for Heat configuration tasks

2013-11-13 Thread Thomas Spatzier
Angus Salkeld asalk...@redhat.com wrote on 12.11.2013 23:05:57:
 From: Angus Salkeld asalk...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 12.11.2013 23:09
 Subject: Re: [openstack-dev] [Mistral] really simple workflow for
 Heat configuration tasks

 On 12/11/13 13:04 +0100, Thomas Spatzier wrote:
 Hi Angus,
 
 that is an interesting idea. Since you mentioned the software config
 proposal in the beginning as a related item, I guess you are trying to
 solve some software config related issues with Mistral. So a few
questions,
 looking at this purely from a software config perspective:
 
 Are you thinking about doing the infrastructure orchestration (VMs,
 volumes, network etc) with Heat's current capabilities and then let the
 complete software orchestration be handled by Mistral tasks? I.e.
bootstrap
 the workers on each VM and have the definition of when which agent does
 something defined in a flow?

 Well we either add an api to heat to do install_config or we use
 a service that is designed to do tasks. Clint convinced me quite
 easily that install/apply_config is just a task.

 
 If yes, is there a way for passing data around - e.g. output produced by
 one software config step is input for another software config step?
 
 Again, if my above assumption is true, couldn't there be problems when
we
 having two ways of doing orchestration, when the software layer thing
would
 take the Heat engine out of some processing and take away some control?
Or
 are you thinking about using Mistral as a general mechanism for task
 execution in Heat, which would then probably resolve the conflict?
 
 At this point we really do not need a flow, just a task concept
 from Mistral. Prehaps ways of grouping them and targeting them
 for a particular server.

 I'd see the config_deployer resource posting a task to Mistral
 and we have an agent in the server that can consume tasks and
 pass them to sub-agents that understand the particular format.

Ok, makes sense to me. And I don't see a conflict with the software config
proposal, but this is one of the implementation details we said need to
be figured out :-)


 If we do this then Heat is in charge of the orchestration and
 there are not two workflows fighting for control. I do agree
 that there should just be one.

 I think once Mistral is more mature we can decide whether to pass
 full workflow control over to it, but for now the task functionality
 is all we need. (and a time based one would be neat too btw).

 -Angus

 Regards,
 Thomas
 
 Angus Salkeld asalk...@redhat.com wrote on 12.11.2013 02:15:15:
  From: Angus Salkeld asalk...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 12.11.2013 02:18
  Subject: [openstack-dev] [Mistral] really simple workflow for Heat
  configuration tasks
 
  Hi all
 
  I think some of you were at the Software Config session at summit,
  but I'll link the ideas that were discussed:
 
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
 
  To me the basics of it are:
  1. we need an entity/resource to place the configuration (in Heat)
  2. we need a resource to install the configuration
  (basically a task in Mistral)
 
 
  A big issue to me is the conflict between heat's taskflow and the new
  external one. What I mean by conflict is that it will become tricky
  to manage two parallel taskflow instances in one stack.
 
  This could be solved by:
  1: totally using mistral (only use mistral workflow)
  2: use a very simple model of just asking mistral to run tasks (no
  workflow) this allows us to use heat's workflow
  but mistral's task runner.
 
  Given that mistral has no real implementation yet, 2 would seem
  reasonable to me. (I think Heat developers are open to 1 when
  Mistral is more mature.)
 
  How could we use Mistral for config installation?
  -
  1. We have a resource type in Heat that creates tasks in a Mistral
  workflow (manual workflow).
  2. Heat pre-configures the server to have a Mistral worker
  installed.
  3. the Mistral worker pulls tasks from the workflow and passes them
  to an agent that can run it. (the normal security issues jump up
  here - giving access to the taskflow from a guest).
 
  To do this we need an api that can add tasks to a workflow
dynamically.
  like this:
  - create a simple workflow
  - create and run task A [run on server X]
  - create and run task B [run on server Y]
  - create and run task C [run on server X]
 
  (note: the task is run and completes before the next is added if there
  is a dependancy, if tasks can be run in parallel then we add
 multiple
  tasks)
 
  The api could be something like:
  CRUD mistral/workflows/
  CRUD mistral/workflows/wf/tasks
 
 
  One thing that I am not sure of is how a server(worker) would know if
a
 task
  was for it or not.
  - perhaps we have

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Zane Bitter zbit...@redhat.com wrote on 13.11.2013 18:11:18:
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 13.11.2013 18:14
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/11/13 17:57, Thomas Spatzier wrote:
snip
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
  Please review and provide feedback.

 I believe there's an error in the Explicit dependency section, where it
 says that depends_on is a property. In cfn DependsOn actually exists at
 the same level as Type, Properties, c.

 resources:
client:
  type: My::Software::SomeClient
  properties:
server: { get_resource: my_server }
params:
  # params ...
  depends_on:
- get_resource: server_process1
- get_resource: server_process2

Good point. I think my reasoning was tied too much to the provider template
concept, where all properties get passed automatically to the provider
template and in there you can basically do anything that is necessary,
including handling dependencies. But I was missing the fact that this is a
generic concept for all resources.
I'll fix it in the wiki.
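For reference, a cleanly indented version of Zane's example, with depends_on as a sibling of type/properties (as in cfn's DependsOn) and accepting a list — again assuming the list form gets accepted for HOT:

```yaml
resources:
  client:
    type: My::Software::SomeClient
    properties:
      server: {get_resource: my_server}
      params:
        # params ...
    # depends_on sits at the same level as type/properties,
    # not inside properties, and may take a list of resources
    depends_on:
      - get_resource: server_process1
      - get_resource: server_process2
```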


 And conceptually this seems correct, because it applies to any kind of
 resource, whereas properties are defined per-resource-type.

 Don't be fooled by our implementation:
 https://review.openstack.org/#/c/44733/

 It also doesn't support a list, but I think we can and should fix that
 in HOT.

Doesn't DependsOn already support lists? I quickly checked the code and it
seems it does:
https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L288


 cheers,
 Zane.






Re: [openstack-dev] [Mistral] really simple workflow for Heat configuration tasks

2013-11-12 Thread Thomas Spatzier
Hi Angus,

that is an interesting idea. Since you mentioned the software config
proposal in the beginning as a related item, I guess you are trying to
solve some software config related issues with Mistral. So a few questions,
looking at this purely from a software config perspective:

Are you thinking about doing the infrastructure orchestration (VMs,
volumes, network etc) with Heat's current capabilities and then let the
complete software orchestration be handled by Mistral tasks? I.e. bootstrap
the workers on each VM and have the definition of when which agent does
something defined in a flow?

If yes, is there a way for passing data around - e.g. output produced by
one software config step is input for another software config step?

Again, if my above assumption is true, couldn't there be problems with
having two ways of doing orchestration, where the software layer would
take the Heat engine out of some processing and take away some control? Or
are you thinking about using Mistral as a general mechanism for task
execution in Heat, which would then probably resolve the conflict?

Regards,
Thomas

Angus Salkeld asalk...@redhat.com wrote on 12.11.2013 02:15:15:
 From: Angus Salkeld asalk...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 12.11.2013 02:18
 Subject: [openstack-dev] [Mistral] really simple workflow for Heat
 configuration tasks

 Hi all

 I think some of you were at the Software Config session at summit,
 but I'll link the ideas that were discussed:
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config

 To me the basics of it are:
 1. we need an entity/resource to place the configuration (in Heat)
 2. we need a resource to install the configuration
 (basically a task in Mistral)


 A big issue to me is the conflict between heat's taskflow and the new
 external one. What I mean by conflict is that it will become tricky
 to manage two parallel taskflow instances in one stack.

 This could be solved by:
 1: totally using mistral (only use mistral workflow)
 2: use a very simple model of just asking mistral to run tasks (no
 workflow) this allows us to use heat's workflow
 but mistral's task runner.

 Given that mistral has no real implementation yet, 2 would seem
 reasonable to me. (I think Heat developers are open to 1 when
 Mistral is more mature.)

 How could we use Mistral for config installation?
 -
 1. We have a resource type in Heat that creates tasks in a Mistral
 workflow (manual workflow).
 2. Heat pre-configures the server to have a Mistral worker
 installed.
 3. the Mistral worker pulls tasks from the workflow and passes them
 to an agent that can run it. (the normal security issues jump up
 here - giving access to the taskflow from a guest).

 To do this we need an api that can add tasks to a workflow dynamically.
 like this:
 - create a simple workflow
 - create and run task A [run on server X]
 - create and run task B [run on server Y]
 - create and run task C [run on server X]

 (note: the task is run and completes before the next is added if there
 is a dependancy, if tasks can be run in parallel then we add
multiple
 tasks)

 The api could be something like:
 CRUD mistral/workflows/
 CRUD mistral/workflows/wf/tasks


 One thing that I am not sure of is how a server(worker) would know if a
task
 was for it or not.
 - perhaps we have a capability property of the task that we can use
(capablitiy[server] = server-id) or actually specify the worker we
want.

 I think this would be a good starting point for Mistral as it is a
 very simple but concrete starting point. Also if this is not done in
 Mistral we will have to add this in Heat (lets rather have it where
 it should be). This will also give us a chance to have confidence
 with Mistral before trying to do more complex workflows.

 If you (Heat and Mistral developers) are open to this we can discuss
 what needs to be done. I am willing to help with implementation.

 Thanks
 -Angus









[openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-11 Thread Thomas Spatzier

Hi all,

I have just posted the following wiki page to reflect a refined proposal
for HOT software configuration based on discussions at the design summit
last week. Angus also put a sample up in an etherpad last week, but we did
not have enough time to go thru it in the design session. My write-up is
based on Angus' sample, actually a refinement, and on discussions we had in
breaks, plus it is trying to reflect all the good input from ML discussions
and Steve Baker's initial proposal.

https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP

Please review and provide feedback.

Regards,
Thomas




Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config

2013-10-31 Thread Thomas Spatzier
Zane Bitter zbit...@redhat.com wrote on 30.10.2013 22:33:31:
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 30.10.2013 22:36
 Subject: Re: [openstack-dev] [Heat] Comments on Steve Baker's
 Proposal on HOT Software Config

 On 30/10/13 20:35, Lakshminaraya Renganarayana wrote:
   I'd like to see some more detail about how
inputs/outputs would be exposed in the configuration management
systems
- or, more specifically, how the user can extend this to arbitrary
configuration management systems.
 
  The way inputs/outputs are exposed in a CM system
  would depend on its conventions. In our use with Chef, we expose these
  inputs and outputs as a Chef's node attributes, i.e., via the node[][]
  hash. I could imagine a similar scheme for Puppet. For a shell type of
  CM provider the inputs/outputs can be exposed as Shell environment
  variables. To avoid name conflicts, these inputs/outputs can be
prefixed
  by a namespace, say Heat.

 Right, so who writes the code that exposes the inputs/outputs to the CM
 system in that way? If it is the user, where does that code go and how
 does it work? And if it's not the user, how would the user accommodate a
 CM system that has not been envisioned by their provider? That's what
 I'm trying to get at with this question.

I think it is not the user who writes this binding / glue code; rather, I
envision a handful of component providers for the most common CM systems
that implement the logic to (1) take data from Heat, (2) pass it to the
CM system invocation, (3) wait for the CM system to report completion, and
(4) parse return data and pass it back to Heat.
So this is basically an adapter, and for each such adapter it would have
to be documented how the automation must be written to work with it.
E.g. in bash scripts the user can access all inputs as environment
variables, while for Chef the mapping is like Lakshmi described it above.
We have something like this implemented internally and I think such an
implementation could be added to Heat.

For any kind of custom CM system where we do not provide such an adapter,
the user would have to write a plugin to handle the same logic.
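As an example of what such an adapter contract could look like for a shell-based config — everything below is illustrative, not an agreed interface; the resource type name and the `HEAT_` prefix are made up — declared inputs could be surfaced to the script as namespaced environment variables:

```yaml
resources:
  db_config:
    type: OS::Heat::ShellConfig        # hypothetical shell adapter
    properties:
      inputs:
        db_user: {type: string}
        db_password: {type: string}
      # contract of this hypothetical adapter: each input is exposed to
      # the script as an environment variable with a Heat namespace
      # prefix (HEAT_db_user, HEAT_db_password) to avoid name conflicts,
      # and declared outputs are parsed from the adapter's result data
      script: |
        #!/bin/bash
        mysql -u root -e "CREATE USER '${HEAT_db_user}'@'%' \
          IDENTIFIED BY '${HEAT_db_password}';"
      outputs:
        result: {type: string}
```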


 cheers,
 Zane.






Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-25 Thread Thomas Spatzier
Hi Keith,

thanks for sharing your opinion. That makes sense, and I know Adrian was
heavily involved in the discussions at the Portland summit, so it seems
like the right contacts are hooked up.
Looking forward to the discussions at the summit.

Regards,
Thomas

Keith Bray keith.b...@rackspace.com wrote on 25.10.2013 02:23:55:

 From: Keith Bray keith.b...@rackspace.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 25.10.2013 02:31
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 Hi Thomas, here's my opinion:  Heat and Solum contributors will work
 closely together to figure out where specific feature implementations
 belong... But, in general, Solum is working at a level above Heat.  To
 write a Heat template, you have to know about infrastructure setup and
 configuration settings of infrastructure and API services.  I believe
 Solum intends to provide the ability to tweak and configure the amount of
 complexity that gets exposed or hidden so that it becomes easier for
cloud
 consumers to just deal with their application and not have to necessarily
 know or care about the underlying infrastructure and API services, but
 that level of detail can be exposed to them if necessary. Solum will know
 what infrastructure and services to set up to run applications, and it
 will leverage Heat and Heat templates for this.

 The Solum project has been very vocal about leveraging Heat under the
hood
 for the functionality and vision of orchestration that it intends to
 provide.  It seems, based on this thread (and +1 from me), enough people
 are interested in having Heat provide some level of software
 orchestration, even if it's just bootstrapping other CM tools and
 coordinating the when are you done, and I haven't heard any Solum folks
 object to Heat implementing software orchestration capabilities... So,
I'm
 looking forward to great discussions on this topic for Heat at the
summit.
  If you recall, Adrian Otto (who announced project Solum) was also the
one
 who was vocal at the Portland summit about the need for HOT syntax.  I
 think both projects are on a good path with a lot of fun collaboration
 time ahead.

 Kind regards,
 -Keith

 On 10/24/13 7:56 AM, Thomas Spatzier thomas.spatz...@de.ibm.com
wrote:

 Hi all,
 
 maybe a bit off track with respect to latest concrete discussions, but I
 noticed the announcement of project Solum on openstack-dev.
 Maybe this is playing on a different level, but I still see some
relation
 to all the software orchestration we are having. What are your opinions
on
 this?
 
 BTW, I just posted a similar short question in reply to the Solum
 announcement mail, but some of us have mail filters and might read [Heat]
 mail with higher prio, and I was interested in the Heat view.
 
 Cheers,
 Thomas
 
 Patrick Petit patrick.pe...@bull.net wrote on 24.10.2013 12:15:13:
  From: Patrick Petit patrick.pe...@bull.net
  To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org,
  Date: 24.10.2013 12:18
  Subject: Re: [openstack-dev] [Heat] HOT Software configuration
proposal
 
  Sorry, I clicked the 'send' button too quickly.
 
  On 10/24/13 11:54 AM, Patrick Petit wrote:
   Hi Clint,
   Thank you! I have few replies/questions in-line.
   Cheers,
   Patrick
   On 10/23/13 8:36 PM, Clint Byrum wrote:
   Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
   Dear Steve and All,
  
   If I may add up on this already busy thread to share our
experience
   with
   using Heat in large and complex software deployments.
  
   Thanks for sharing Patrick, I have a few replies in-line.
  
   I work on a project which precisely provides additional value at
the
   articulation point between resource orchestration automation and
   configuration management. We rely on Heat and chef-solo
respectively
   for
   these base management functions. On top of this, we have developed
 an
   event-driven workflow to manage the life-cycles of complex
software
   stacks which primary purpose is to support middleware components
as
   opposed to end-user apps. Our use cases are peculiar in the sense
 that
   software setup (install, config, contextualization) is not a
 one-time
   operation issue but a continuous thing that can happen any time in
   life-span of a stack. Users can deploy (and undeploy) apps long
time
   after the stack is created. Auto-scaling may also result in an
   asynchronous apps deployment. More about this latter. The
framework
 we
   have designed works well for us. It clearly refers to a PaaS-like
   environment which I understand is not the topic of the HOT
software
   configuration proposal(s) and that's absolutely fine with us.
 However,
   the question for us is whether the separation of software config
 from
   resources would make our life easier or not. I think the answer is
   definitely yes but at the condition that the DSL extension
preserves
   almost everything from the expressiveness

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-24 Thread Thomas Spatzier
Hi all,

maybe a bit off track with respect to latest concrete discussions, but I
noticed the announcement of project Solum on openstack-dev.
Maybe this is playing on a different level, but I still see some relation
to all the software orchestration we are having. What are your opinions on
this?

BTW, I just posted a similar short question in reply to the Solum
announcement mail, but some of us have mail filters an might read [Heat]
mail with higher prio, and I was interested in the Heat view.

Cheers,
Thomas

Patrick Petit patrick.pe...@bull.net wrote on 24.10.2013 12:15:13:
 From: Patrick Petit patrick.pe...@bull.net
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 24.10.2013 12:18
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 Sorry, I clicked the 'send' button too quickly.

 On 10/24/13 11:54 AM, Patrick Petit wrote:
  Hi Clint,
  Thank you! I have few replies/questions in-line.
  Cheers,
  Patrick
  On 10/23/13 8:36 PM, Clint Byrum wrote:
  Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
  Dear Steve and All,
 
  If I may add up on this already busy thread to share our experience
  with
  using Heat in large and complex software deployments.
 
  Thanks for sharing Patrick, I have a few replies in-line.
 
  I work on a project which precisely provides additional value at the
  articulation point between resource orchestration automation and
  configuration management. We rely on Heat and chef-solo respectively
  for
  these base management functions. On top of this, we have developed an
  event-driven workflow to manage the life-cycles of complex software
  stacks which primary purpose is to support middleware components as
  opposed to end-user apps. Our use cases are peculiar in the sense
that
  software setup (install, config, contextualization) is not a one-time
  operation issue but a continuous thing that can happen any time in
  life-span of a stack. Users can deploy (and undeploy) apps long time
  after the stack is created. Auto-scaling may also result in an
  asynchronous apps deployment. More about this later. The framework
we
  have designed works well for us. It clearly refers to a PaaS-like
  environment which I understand is not the topic of the HOT software
  configuration proposal(s) and that's absolutely fine with us.
However,
  the question for us is whether the separation of software config from
  resources would make our life easier or not. I think the answer is
  definitely yes but at the condition that the DSL extension preserves
  almost everything from the expressiveness of the resource element. In
  practice, I think that a strict separation between resource and
  component will be hard to achieve because we'll always need a little
  bit
  of application's specific in the resources. Take for example the
  case of
  the SecurityGroups. The ports open in a SecurityGroup are application
  specific.
 
  Components can only be made up of the things that are common to all
  users
  of said component. Also components would, if I understand the concept
  correctly, just be for things that are at the sub-resource level.
  Security groups and open ports would be across multiple resources, and
  thus would be separately specified from your app's component (though
it
  might be useful to allow components to export static values so that
the
  port list can be referred to along with the app component).
 Okay got it. If that's the case then that would work
 
  Then, designing a Chef or Puppet component type may be harder than it
  looks at first glance. Speaking of our use cases we still need a
little
  bit of scripting in the instance's user-data block to setup a working
  chef-solo environment. For example, we run librarian-chef prior to
  starting chef-solo to resolve the cookbook dependencies. A cookbook
can
  present itself as a downloadable tarball but it's not always the
  case. A
  chef component type would have to support getting a cookbook from a
  public or private git repo (maybe subversion), handle situations
where
  there is one cookbook per repo or multiple cookbooks per repo, let
the
  user choose a particular branch or label, provide ssh keys if it's a
  private repo, and so forth. We support all of this scenarios and so
we
  can provide more detailed requirements if needed.
 
  Correct me if I'm wrong though, all of those scenarios are just
  variations
  on standard inputs into chef. So the chef component really just has to
  allow a way to feed data to chef.
 
 That's correct. Boils down to specifying correctly all the constraints
 that apply to deploying a cookbook in an instance from its component
 description.
 
  I am not sure adding component relations like the 'depends-on' would
  really help us since it is the job of config management to handle
  software dependencies. Also, it doesn't address the issue of circular
  dependencies. Circular dependencies occur in complex software 

Re: [openstack-dev] Announcing Project Solum

2013-10-24 Thread Thomas Spatzier
Hi Adrian,

really interesting! I wonder how this relates to all the software
orchestration discussions in Heat that have been going on for a while
now.

Regards,
Thomas

Adrian Otto adrian.o...@rackspace.com wrote on 23.10.2013 21:03:10:
 From: Adrian Otto adrian.o...@rackspace.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 23.10.2013 21:07
 Subject: [openstack-dev] Announcing Project Solum

 OpenStack,

 OpenStack has emerged as the preferred choice for open cloud
 software worldwide. We use it to power our cloud, and we love it.
 We’re proud to be a part of growing its capabilities to address more
 needs every day. When we ask customers, partners, and community
 members about what problems they want to solve next, we have
 consistently found a few areas where OpenStack has room to grow in
 addressing the needs of software developers:

 1)   Ease of application development and deployment via integrated
 support for Git, CI/CD, and IDEs

 2)   Ease of application lifecycle management across dev, test, and
 production types of environments -- supported by the Heat project’s
 automated orchestration (resource deployment, monitoring-based self-
 healing, auto-scaling, etc.)

 3)   Ease of application portability between public and private
 clouds -- with no vendor-driven requirements within the application
 stack or control system

 Along with eBay, RedHat, Ubuntu/Canonical, dotCloud/Docker,
 Cloudsoft, and Cumulogic, we at Rackspace are happy to announce we
 have started project Solum as an OpenStack Related open source
 project. Solum is a community-driven initiative currently in its
 open design phase amongst the seven contributing companies with more to
come.

 We plan to leverage the capabilities already offered in OpenStack in
 addressing these needs so anyone running an OpenStack cloud can make
 it easier to use for developers. By leveraging your existing
 OpenStack cloud, the aim of Project Solum is to reduce the number of
 services you need to manage in tackling these developer needs. You
 can use all the OpenStack services you already run instead of
 standing up overlapping, vendor-specific capabilities to accomplish this.

 We welcome you to join us to build this exciting new addition to the
 OpenStack ecosystem.

 Project Wiki
 https://wiki.openstack.org/wiki/Solum

 Launchpad Project
 https://launchpad.net/solum

 IRC
 Public IRC meetings are held on Tuesdays 1600 UTC
 irc://irc.freenode.net:6667/solum

 Thanks,

 Adrian Otto
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-23 Thread Thomas Spatzier
Clint Byrum cl...@fewbar.com wrote on 23.10.2013 00:28:17:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 23.10.2013 00:30
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 Excerpts from Georgy Okrokvertskhov's message of 2013-10-22 13:32:40
-0700:
  Hi Thomas,
 
  I agree with you on semantics part. At the same time I see a potential
  question which might appear - if semantics is limited by few states
visible
  for Heat engine, then who actually does software orchestration?
  Will it be reasonable then to have software orchestration as separate
  subproject for Heat as a part of Orchestration OpenStack program?
Heat
  engine will then do dependency tracking and will use components as a
  reference for software orchestration engine which will perform actual
  deployment and high level software components coordination.
 
  This separated software orchestration engine may address all specific
  requirements proposed by different teams in this thread without
affecting
  existing Heat engine.
 

 I'm not sure I know what software orchestration is, but I will take a
 stab at a succinct definition:

 Coordination of software configuration across multiple hosts.

 If that is what you mean, then I believe what you actually want is
 workflow. And for that, we have the Mistral project which was recently
 announced [1].

My view of software orchestration, in the sense of what Heat should be able
to do, is bringing up software installations (e.g. a web server, a DBMS, a
custom application) on top of a bare compute resource by invoking a
software config tool (e.g. Chef, Puppet ...) at the right point in time and
letting that tool do the actual work.
Invoke does not necessarily mean to call an API of such a tool, but rather
to make sure it is bootstrapped and maybe gets a go signal to start.
Software orchestration could then further mean giving CM tools across
hosts the go signal when the config on one host has completed. This could be
enabled by the signaling enhancements Steve Baker mentioned in one of his
recent mails.

For such kind of stuff, I think we could live without workflows but do it
purely declaratively. Of course, a workflow could be the underlying
mechanism, but I would not want to express this in a template. If users
have very complex problems to solve and cannot live with the simple
software orchestration I outlined, then a workflow could still be used for
everything on top of the OS.
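To make the go-signal idea concrete, here is a minimal sketch of the
coordination semantics (plain Python with stdlib threading standing in for
Heat's actual signalling; the class and method names are made up purely for
illustration, not a proposed API):

```python
import threading

class GoSignalCoordinator:
    """Toy stand-in for Heat's signalling: a component may start its
    CM tool run only after every component it depends on has signalled
    completion, possibly on another host."""

    def __init__(self):
        self._done = {}
        self._lock = threading.Lock()

    def _event(self, name):
        with self._lock:
            return self._done.setdefault(name, threading.Event())

    def wait_for_go(self, depends_on):
        # the in-VM agent blocks here until all dependencies are configured
        for dep in depends_on:
            self._event(dep).wait()

    def signal_done(self, name):
        # called once a component's CM tool run has completed on its host
        self._event(name).set()
```

The point is that the template stays declarative: the user writes components
plus dependencies, and an ordering like this is derived from them, so no
workflow constructs leak into HOT.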

Anyway, that was just my view and others most probably have different views
again, so seems like we really have to sort out terminology :-)


 Use that and you will simply need to define your desired workflow and
 feed it into Mistral using a Mistral Heat resource. We can create a
 nice bootstrapping resource for Heat instances that shims the mistral
 workflow execution agent into machines (or lets us use one already there
 via custom images).

 I can imagine it working something like this:

 resources:
   mistral_workflow_handle:
 type: OS::Mistral::WorkflowHandle
   web_server:
 type: OS::Nova::Server
 components:
   mistral_agent:
 component_type: mistral
 params:
   workflow_handle: {ref: mistral_workflow_handle}
   mysql_server:
 type: OS::Nova::Server
 components:
   mistral_agent:
 component_type: mistral
 params:
   workflow_handle: {ref: mistral_workflow_handle}
   mistral_workflow:
 type: OS::Mistral::Workflow
 properties:
   handle: {ref: mistral_workflow_handle}
   workflow_reference: mysql_webapp_workflow
   params:
 mysql_server: {ref: mysql_server}
 webserver: {ref: web_server}


While I can imagine that this works, I think for a big percentage of use
cases it would be nice to avoid this inter-weaving of workflow constructs
with a HOT template. I think we could do a purely declarative approach (if
we scope software orchestration in context of Heat right), and not define
such handles and references.
We are trying to shield this from the users in other cases in HOT
(WaitConditionHandle and references), so why introduce it here ...


 And then the workflow is just defined outside of the Heat template (ok
 I'm sure somebody will want to embed it, but I prefer stronger
 separation). Something like this gets uploaded as
 mysql_webapp_workflow:

 [ 'step1': 'install_stuff',
   'step2': 'wait(step1)',
   'step3': 'allocate_sql_user(server=%mysql_server%)'
   'step4': 'credentials=wait_and_read(step3)'
   'step5': 'write_config_file(server=%webserver%)' ]

 Or maybe it is declared as a graph, or whatever, but it is not Heat's
 problem how to do workflows, it just feeds the necessary data from
 orchestration into the workflow engine. This also means you can use a
 non OpenStack workflow engine without any problems.

 I think after having talked about this, we should have workflow live in
 its own program.. we can always combine them if we want to, 

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-22 Thread Thomas Spatzier
 Steve Baker sba...@redhat.com wrote on 21.10.2013 23:02:47:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 21.10.2013 23:06
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 On 10/22/2013 08:45 AM, Mike Spreitzer wrote:
 Steve Baker sba...@redhat.com wrote on 10/15/2013 06:48:53 PM:

  I've just written some proposals to address Heat's HOT software
  configuration needs, and I'd like to use this thread to get some
feedback:
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
  https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config
 
  Please read the proposals and reply to the list with any comments or
  suggestions.

 Can you confirm whether I have got the big picture right?  I think
 some of my earlier remarks were mistaken.

 You propose to introduce the concept of component and recognize
 software configuration as a matter of invoking components --- with a
 DAG of data dependencies among the component invocations.  While
 this is similar to what today's heat engine does for resources, you
 do NOT propose that the heat engine will get in the business of
 invoking components.  Rather: each VM will run a series of component
 invocations, and in-VM mechanisms will handle the cross-component
 synchronization and data communication.
 This is basically correct, except that in-VM mechanisms won't know
 much about cross-component synchronization and data communication.
 They will just execute whatever components are available to be
 executed, and report back values to heat-engine by signalling to
 waitconditions.
  You propose to add a bit of sugaring for the wait condition handle
 mechanism, and the heat engine will do the de-sugaring.
 Yes, I think improvements can be made on what I proposed, such as
 every component signalling when it is complete, and optionally
 including a return value in that signal.

Being able to handle completion of single components inside a VM and being
able to pass outputs of those components seems important to me. I think
that should mostly address the requirements for declaring data dependencies
between components that have been discussed before in this thread. If
wait-conditions are the underlying mechanism, fine, as long as we can hide
it from the template syntax.

For example, something like this should be possible:

components:
  comp_a:
type: OS::Heat::SomeCMProvider
properties:
  prop1: { get_param: param1 }
  prop2: ...
  comp_b:
type: OS::Heat::SomeCMProvider
properties:
  propA: { get_attr: [ comp_a, some_attr ] }

resources:
  serverA:
type: OS::Nova::Server
# ...
components:
  - comp_a
  serverB:
type: OS::Nova::Server
# ...
components:
  - comp_b

I.e. there are two components comp_a and comp_b on two different servers.
comp_b has a data dependency on an attribute of comp_a. If we treat
'properties' as input to components and 'attributes' as output (the way it
is currently done for resources), that should be doable.
BTW, the convention of properties being input and attributes being output,
i.e. that subtle distinction between properties and attributes is not
really intuitive, at least not to me as non-native speaker, because I used
to use both words as synonyms.
Anyway it seems like the current proposal is a starting point with
enhancements on the roadmap, right?

  Each component is written in one of a few supported configuration
 management (CM) frameworks, and essentially all component
 invocations on a given VM invoke components of the same CM framework
 (with possible exceptions for one or two really basic ones).
 Rather than being limited to a few supported CM tools, I like the
 idea of some kind of provider mechanism so that users or heat admins
 can add support for new CM tools. This implies that it needs to be
 possible to add a component type without requiring custom python
 that runs on heat engine.
 The heat engine gains the additional responsibility of making sure
 that the appropriate CM framework(s) is(are) bootstrapped in each VM.
 Maybe. Or it might be up to the user to invoke images that already
 have the CM tools installed, or the user can provide a custom
 component provider which installs the tool in the way that they want.

 As for the cross-component synchronization and data communication
 question, at this stage I'm not comfortable with bringing something
 like zookeeper into the mix for a general solution for inter-
 component communication.  If heat engine handles resource
 dependencies and zookeeper handles software configuration
 dependencies this would result in the state of the stack being split
 between two different co-ordination mechanisms.

I think that zookeeper could be an _optional_ backend for this, but using
the current mechanisms in Heat should probably be the primary or default
way of doing this.


 We've put quite some effort into heat engine to co-ordinate 

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-22 Thread Thomas Spatzier
Zane Bitter zbit...@redhat.com wrote on 22.10.2013 15:24:28:
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 22.10.2013 15:27
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 On 22/10/13 09:15, Thomas Spatzier wrote:
  BTW, the convention of properties being input and attributes being
output,
  i.e. that subtle distinction between properties and attributes is not
  really intuitive, at least not to me as non-native speaker, because I
used
  to use both words as synonyms.

 As a native speaker, I can confidently state that it's not intuitive to
 anyone ;)

Phew, good to read that ;-)


 We unfortunately inherited these names from the Properties section and
 the Fn::GetAtt function in cfn templates. It's even worse than that,
 because there's a whole category of... uh... things (DependsOn,
 DeletionPolicy, &c.) that don't even have a name - I always have to
 resist the urge to call them 'attributes' too.

So is this something we should try to get straight in HOT while we still
have the flexibility?
Regarding properties/attributes for example, to me I would call both just
properties of a resource or component, and then I can write them or read
them like:

components:
  my_component:
type: ...
properties:
  my_prop: { get_property: [ other_component, other_component_prop ] }

  other_component:
# ...

I.e. you write property 'my_prop' of 'my_component' in its properties
section, and you read property 'other_component_prop' of 'other_component'
using the get_property function.
... we can also call them attributes, but use one name, not two different
names for the same thing.


 - ZB






Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-22 Thread Thomas Spatzier
Zane Bitter zbit...@redhat.com wrote on 22.10.2013 17:23:52:
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 22.10.2013 17:26
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 On 22/10/13 16:35, Thomas Spatzier wrote:
  Zane Bitter zbit...@redhat.com wrote on 22.10.2013 15:24:28:
  From: Zane Bitter zbit...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 22.10.2013 15:27
  Subject: Re: [openstack-dev] [Heat] HOT Software configuration
proposal
 
  On 22/10/13 09:15, Thomas Spatzier wrote:
  BTW, the convention of properties being input and attributes being
  output,
  i.e. that subtle distinction between properties and attributes is not
  really intuitive, at least not to me as non-native speaker, because I
  used
  to use both words as synonyms.
 
  As a native speaker, I can confidently state that it's not intuitive
to
  anyone ;)
 
  Phew, good to read that ;-)
 
 
  We unfortunately inherited these names from the Properties section and
  the Fn::GetAtt function in cfn templates. It's even worse than that,
  because there's a whole category of... uh... things (DependsOn,
  DeletionPolicy, &c.) that don't even have a name - I always have to
  resist the urge to call them 'attributes' too.
 
  So is this something we should try to get straight in HOT while we
still
  have the flexibility?

 Y-yes. Provided that we can do it without making things *more*
 confusing, +1. That's hard though, because there are a number of places
 we have to refer to them, all with different audiences:
   - HOT users
   - cfn users
   - Existing developers
   - New developers
   - Plugin developers

 and using different names for the same thing can cause problems. My test
 for this is: if you were helping a user on IRC debug an issue, is there
 a high chance you would spend 15 minutes talking past each other because
 they misunderstand the terminology?

Hm, good point. Seems like it would really cause more confusion than it
helps. So back away from the general idea of renaming things that exist
both in cfn and HOT.
What we should try of course is to give new concepts that will only exist
in HOT intuitive names.


  Regarding properties/attributes for example, to me I would call both
just
  properties of a resource or component, and then I can write them or
read
  them like:
 
  components:
 my_component:
   type: ...
   properties:
 my_prop: { get_property: [ other_component,
other_component_prop ] }
 
 other_component:
   # ...
 
  I.e. you write property 'my_prop' of 'my_component' in its properties
  section, and you read property 'other_component_prop' of
'other_component'
  using the get_property function.
  ... we can also call them attributes, but use one name, not two
different
  names for the same thing.

 IMO inputs (Properties) and outputs (Fn::GetAtt) are different things
 (and they exist in different namespaces), so -1 for giving them the same
 name.

 In an ideal world I'd like HOT to use something like get_output_data (or
 maybe just get_data), but OTOH we have e.g. FnGetAtt() and
 attributes_schema baked in to the plugin API that we can't really
 change, so it seems likely to lead to developers and users adopting
 different terminology, or making things very difficult for new
 developers, or both :(

 cheers,
 Zane.






Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-22 Thread Thomas Spatzier
Stan Lagun sla...@mirantis.com wrote on 22.10.2013 19:02:38:
 From: Stan Lagun sla...@mirantis.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 22.10.2013 19:06
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 Hello,

 I've been reading through the thread and the wiki pages and I'm
 still confused by the terms. Is there a clear definition of what
 we understand by component from the user's and the developer's point of
 view? If I write component, type: MySQL, what is behind that
 definition? I mean how does the system know what exactly MySQL is

I think the current proposal is not that Heat would support very specific
component types (like MySQL, Apache, Tomcat etc.) but component is more of
a generic construct to represent a piece of software. The type attribute of
a component then just calls out the config management tool (e.g. Chef) to
install and configure that piece of software. By pointing a component to,
say, a Chef cookbook for setting up MySQL, the runtime type would be MySQL.
That is at least my view on this.

I agree, however, that it needs to be straightened out how the term
component is really used.
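To illustrate that view (this is not an actual Heat interface — the provider
registry, function names, and cookbook URL below are all invented for the
sake of the example), the type of a component would select the CM tool,
while its properties point at the automation content that makes it MySQL:

```python
# Hypothetical sketch: "type" selects a CM tool, not a piece of software.
# That the component ends up being MySQL comes entirely from the cookbook
# its properties point at.

CM_PROVIDERS = {}

def provider(name):
    """Register a handler for a component type; a provider mechanism like
    this would let admins add CM tools without changing heat-engine."""
    def register(fn):
        CM_PROVIDERS[name] = fn
        return fn
    return register

@provider("chef")
def run_chef(props):
    # a real provider would bootstrap chef-solo and run the cookbook;
    # here we only describe the invocation
    return "chef-solo run of cookbook %s (roles: %s)" % (
        props["cookbook"], ",".join(props.get("roles", [])))

def deploy_component(component):
    return CM_PROVIDERS[component["type"]](component["properties"])

mysql_component = {
    "type": "chef",  # generic: names the CM tool, not "MySQL"
    "properties": {
        "cookbook": "https://example.org/cookbooks/mysql.tgz",
        "roles": ["mysql-server"],
    },
}
```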

 and how to install it? What MySQL version is it gonna be? Will it be
 x86 or x64? How does the system understand that I need MySQL for
 Windows on Windows VM rather then Linux MySQL? What do I as a
 developer need to do so that it would be possible to have type:
 MyCoolComponentType?


 On Tue, Oct 22, 2013 at 8:35 PM, Thomas Spatzier
thomas.spatz...@de.ibm.com
  wrote:
 Zane Bitter zbit...@redhat.com wrote on 22.10.2013 17:23:52:
  From: Zane Bitter zbit...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 22.10.2013 17:26
  Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal
 
  On 22/10/13 16:35, Thomas Spatzier wrote:
   Zane Bitter zbit...@redhat.com wrote on 22.10.2013 15:24:28:
   From: Zane Bitter zbit...@redhat.com
   To: openstack-dev@lists.openstack.org,
   Date: 22.10.2013 15:27
   Subject: Re: [openstack-dev] [Heat] HOT Software configuration
 proposal
  
   On 22/10/13 09:15, Thomas Spatzier wrote:
   BTW, the convention of properties being input and attributes being
   output,
   i.e. that subtle distinction between properties and attributes is
not
   really intuitive, at least not to me as non-native speaker, because
I
   used
   to use both words as synonyms.
  
   As a native speaker, I can confidently state that it's not intuitive
 to
   anyone ;)
  
   Phew, good to read that ;-)
  
  
   We unfortunately inherited these names from the Properties section
and
   the Fn::GetAtt function in cfn templates. It's even worse than that,
   because there's a whole category of... uh... things (DependsOn,
   DeletionPolicy, &c.) that don't even have a name - I always have to
   resist the urge to call them 'attributes' too.
  
   So is this something we should try to get straight in HOT while we
 still
   have the flexibility?
 
  Y-yes. Provided that we can do it without making things *more*
  confusing, +1. That's hard though, because there are a number of places
  we have to refer to them, all with different audiences:
    - HOT users
    - cfn users
    - Existing developers
    - New developers
    - Plugin developers
 
  and using different names for the same thing can cause problems. My
test
  for this is: if you were helping a user on IRC debug an issue, is there
  a high chance you would spend 15 minutes talking past each other
because
  they misunderstand the terminology?

 Hm, good point. Seems like it would really cause more confusion than it
 helps. So back away from the general idea of renaming things that exist
 both in cfn and HOT.
 What we should try of course is to give new concepts that will only exist
 in HOT intuitive names.

 
   Regarding properties/attributes for example, to me I would call both
 just
   properties of a resource or component, and then I can write them or
 read
   them like:
  
   components:
      my_component:
        type: ...
        properties:
          my_prop: { get_property: [ other_component,
 other_component_prop ] }
  
      other_component:
        # ...
  
   I.e. you write property 'my_prop' of 'my_component' in its properties
   section, and you read property 'other_component_prop' of
 'other_component'
   using the get_property function.
   ... we can also call them attributes, but use one name, not two
 different
   names for the same thing.
 
  IMO inputs (Properties) and outputs (Fn::GetAtt) are different things
  (and they exist in different namespaces), so -1 for giving them the
same
  name.
 
  In an ideal world I'd like HOT to use something like get_output_data
(or
  maybe just get_data), but OTOH we have e.g. FnGetAtt() and
  attributes_schema baked in to the plugin API that we can't really
  change, so it seems likely to lead to developers and users adopting
  different terminology, or making things very difficult for new
  developers

Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Thomas Spatzier
Hi Lakshmi,

you mentioned an example in your original post, but I did not find it. Can
you add the example?

Lakshminaraya Renganarayana lren...@us.ibm.com wrote on 18.10.2013
20:57:43:
 From: Lakshminaraya Renganarayana lren...@us.ibm.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 18.10.2013 21:01
 Subject: Re: [openstack-dev] [Heat] A prototype for cross-vm
 synchronization and communication

 Just wanted to add a couple of clarifications:

 1. the cross-vm dependences are captured via the read/writes of
 attributes in resources and in software components (described in
 metadata sections).

 2. these dependences are then realized via blocking-reads and writes
 to zookeeper, which realizes the cross-vm synchronization and
 communication of values between the resources.
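The blocking-read semantics can be sketched as follows (stdlib threading
stands in for zookeeper here, purely to show the semantics; the prototype
itself talks to a real zookeeper instance, and the class name is mine):

```python
import threading

class DataSpace:
    """Stand-in for the zookeeper-backed global data space: reading an
    attribute blocks until some resource has written it."""

    def __init__(self):
        self._values = {}
        self._cond = threading.Condition()

    def write(self, key, value):
        # e.g. the mysql resource publishing its IP for remote readers
        with self._cond:
            self._values[key] = value
            self._cond.notify_all()

    def blocking_read(self, key, timeout=None):
        # e.g. the web-app resource waiting for the mysql IP to appear
        with self._cond:
            self._cond.wait_for(lambda: key in self._values, timeout)
            return self._values[key]
```

A reader on one VM simply calls blocking_read and proceeds once the value
arrives, which is how the cross-vm dependences are realized at run-time.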

 Thanks,
 LN


 Lakshminaraya Renganarayana/Watson/IBM@IBMUS wrote on 10/18/2013 02:45:01
PM:

  From: Lakshminaraya Renganarayana/Watson/IBM@IBMUS
  To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
  Date: 10/18/2013 02:48 PM
  Subject: [openstack-dev] [Heat] A prototype for cross-vm
  synchronization and communication
 
  Hi,
 
  In the last Openstack Heat meeting there was good interest in
  proposals for cross-vm synchronization and communication and I had
  mentioned the prototype I have built. I had also promised that I
  will post an outline of the prototype ... Here it is. I might have
  missed some details, please feel free to ask / comment and I would
  be happy to explain more.
  ---
  Goal of the prototype: Enable cross-vm synchronization and
  communication using high-level declarative description (no
  wait-conditions). Use chef as the CM tool.
 
  Design rationale / choices of the prototype (note that these were
  made just for the prototype and I am not proposing them to be the
  choices for Heat/HOT):
 
  D1: No new construct in Heat template
  = use metadata sections
  D2: No extensions to core Heat engine
  = use a pre-processor that will produce a Heat template that the
  standard Heat engine can consume
  D3: Do not require chef recipes to be modified
  = use a convention of accessing inputs/outputs from chef node[][]
  = use ruby meta-programming to intercept reads/writes to node[][]
  & forward values
  D4: Use a standard distributed coordinator (don't reinvent)
  = use zookeeper as a coordinator and as a global data space for
 communication
 
  Overall, the flow is the following:
  1. User specifies a Heat template with details about software config
  and dependences in the metadata section of resources (see step S1
below).
  2. A pre-processor consumes this augmented heat template and
  produces another heat template with user-data sections with cloud-
  init scripts and also sets up a zookeeper instance with enough
  information to coordinate between the resources at runtime to
  realize the dependences and synchronization (see step S2)
  3. The generated heat template is fed into standard heat engine to
  deploy. After the VMs are created the cloud-init script kicks in.
  The cloud init script installs chef solo and then starts the
  execution of the roles specified in the metadata section. During
  this execution of the recipes the coordination is realized (see
  steps S2 and S3 below).
 
  Implementation scheme:
  S1. Use metadata section of each resource to describe  (see
 attached example)
  - a list of roles
  - inputs to and outputs from each role and their mapping to resource
  attrs (any attr)
  - convention: these inputs/outputs will be through chef node attrs node
[][]
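The attached example is not reproduced here, but under the S1 convention a resource's metadata might look roughly like the sketch below, written as a Python dict for brevity (all keys, names and the helper are illustrative, not the prototype's actual schema):

```python
# Illustrative sketch of the S1 metadata convention: a resource lists its
# chef roles plus the inputs/outputs those roles read/write via node[][].
# Every key and value here is hypothetical, not the prototype's schema.
app_server_metadata = {
    "roles": ["base", "app_server"],
    "inputs": {
        # value produced at runtime by another resource's role
        "node[app][db_endpoint]": {"from": "db_server.outputs.endpoint"},
    },
    "outputs": {
        # value this role publishes, mapped to a resource attribute
        "node[app][url]": {"map_to": "attr:first_address"},
    },
}

def runtime_inputs(metadata):
    """Return the node[][] keys whose values come from other resources."""
    return sorted(metadata.get("inputs", {}))
```

The pre-processor in S2 would read such a section to work out which values flow between resources.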
 
  S2. Dependence analysis and cloud init script generation
 
  Dependence analysis:
  - resolve every reference that can be statically resolved using
  Heat's functions (this step just uses Heat's current dependence
  analysis -- Thanks to Zane Bitter for helping me understand this)
  - flag all unresolved references as values resolved at run-time and
  communicated via the coordinator
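The classification step can be sketched as a walk over a template fragment that splits references into statically resolvable ones and run-time ones (the template structure and the get_attr convention below are simplified stand-ins for Heat's real intrinsic functions):

```python
# Sketch of the S2 dependence analysis: references whose target attribute
# is already known are resolved statically; the rest are flagged for
# run-time resolution via the coordinator. Simplified, not Heat's code.
def classify_refs(template, known_attrs):
    static, runtime = [], []

    def walk(node):
        if isinstance(node, dict):
            if "get_attr" in node:  # toy intrinsic-function convention
                res, attr = node["get_attr"]
                target = "%s.%s" % (res, attr)
                (static if target in known_attrs else runtime).append(target)
            else:
                for v in node.values():
                    walk(v)
        elif isinstance(node, list):
            for v in node:
                walk(v)

    walk(template)
    return static, runtime

template = {"config": {"db_host": {"get_attr": ["db", "first_address"]},
                       "db_pass": {"get_attr": ["db", "generated_password"]}}}
static, runtime = classify_refs(template, known_attrs={"db.first_address"})
```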
 
  Use cloud-init in user-data sections:
  - automatically generate a script that would bootstrap chef and will
  run the roles/recipes in the order specified in the metadata section
  - generate dependence info for zookeeper to coordinate at runtime
 
  S3. Coordinate synchronization and communication at run-time
  - intercept reads and writes to node[][]
  - if it is a remote read, get it from Zookeeper
  - execution will block till the value is available
  - if write is for a value required by a remote resource, write the
  value to Zookeeper
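The blocking semantics in S3 can be sketched with an in-process stand-in for the Zookeeper data space (a toy model, not the prototype's code; against real Zookeeper the read would set a watch on a znode, e.g. via the kazoo client library, instead of waiting on a local condition variable):

```python
# In-process stand-in for the Zookeeper "global data space" of S3: a read
# for a value no node has written yet blocks until the write arrives,
# which is the behaviour the node[][] interception relies on.
import threading

class DataSpace:
    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def write(self, key, value):
        with self._cond:
            self._data[key] = value
            self._cond.notify_all()

    def read(self, key, timeout=None):
        with self._cond:
            self._cond.wait_for(lambda: key in self._data, timeout)
            return self._data[key]

space = DataSpace()
# simulate vmA's recipe publishing a value that vmB's recipe is blocked on
threading.Timer(0.1, space.write, args=("node[a1][port]", 8080)).start()
value = space.read("node[a1][port]")  # blocks until the write lands
```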
 
  The prototype is implemented in Python and Ruby is used for chef
  interception.
 
  There are alternatives for many of the choices I have made for
the prototype:
  - zookeeper can be replaced with any other service that provides a
  data space and distributed coordination
  - chef can be replaced by any other CM tool (a little bit of design
  / convention needed for other CM tools because of the interception
  used in the prototype to catch 

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-16 Thread Thomas Spatzier
Hi Steve,

thanks a lot for taking the effort to write all this down. I had a look at both
wiki pages and have some comments below. This is really from the top of my
head, and I guess I have to spend some more time thinking about it, but I
wanted to provide some feedback anyway.

On components vs. resources:
So the proposal says clearly that the resource concept is only used for
things that get accessed and managed via their APIs, i.e. services provided
by something external to Heat (nova, cinder, etc), while software is
different and therefore modeled as components, which is basically fine (and
I also suggested this in my initial proposal .. but was never quite sure).
Anyway, I think we also need some APIs to access software components (not
the actual installed software, but the provider managing it), so we can get
the state of a component, and probably also manage the state to do
meaningful orchestration. That would bring it close to the resource concept
again, or components (the providers) would have to provide some means for
getting access to state etc.

Why no processing of intrinsic functions in config block?
... that was actually a question that came up when I read this first, but
maybe is resolved by some text further down in the wiki. But I wanted to
ask for clarification. I thought having intrinsic function could be helpful
for passing parameters around, and also implying dependencies. A bit
further down some concept for parameter passing to the config providers is
introduced, and for filling the parameters, intrinsic functions can be
used. So do I get it right that this would enable the dependency building
and data passing?

Regarding pointer from a server's components section to components vs. a
hosted_on relationship:
The current proposal is in fact (or probably) isomorphic to the hosted_on
links from my earlier proposal. However, having pointers from the lower
layer (servers) to the upper layers (software) seems a bit odd to me. It
would be really nice to get clean decoupling of software and infrastructure
and not just the ability to copy and paste the components and then having
to define server resources specifically to point to components. The
ultimate goal would be to have app layer models and infrastructure models
(e.g. using the environments concept and provider resources) and some way
of binding app components to one or multiple servers per deployment (single
server in test, clustered in production).
Maybe some layer in between is necessary, because neither my earlier
hosted_on proposal nor the current proposal does that.

Why no depends_on (or just dependency) between components?
Ordering in components is ok, but I think it should be possible to express
dependencies between components across servers. Whether or not a
depends_on relationship is the right thing to express this, or just a
more simple dependency notation can be discussed, but I think we need
something. In my approach I tried to come up with one section
(relationship) that is the place for specifying all sorts of links,
dependency being one, just to come up with one extensible way of expressing
things.
Anyway, having the ability to manage dependencies by Heat seems necessary.
And I would not pass the ball completely to the other tools outside of
Heat. First of all, doing things in those other tools also gets complicated
(e.g. while chef is good on one server, doing synchronization across
servers can get ugly). And Heat has the ultimate knowledge about what
servers it created, their IP addresses etc, so it should lead
orchestration.

Regarding component_execution: async
Is this necessary? I think in any case, it should be possible to create
infrastructure resources (servers) in parallel. Then only the component
startup should be synchronized once the servers are up, and this should be
the default behavior. I think this is actually related to the dependency topic
above. BTW, even component startup inside servers can be done in parallel
unless components have dependencies on each other, so doing a component
startup in the strict order as given in a list in the template is probably
not necessary.
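The behavior argued for here (start components in parallel, gating each one only on its declared dependencies rather than on its position in a list) can be sketched as follows; the component names and dependencies are invented for illustration:

```python
# Sketch of dependency-gated parallel component startup: each component
# starts as soon as the components it depends on have finished, not in
# strict list order. Names and dependencies here are illustrative only.
from concurrent.futures import ThreadPoolExecutor
import threading

deps = {"a1": [], "a2": ["a1"], "a3": ["a1"], "b1": ["a1"]}
done = {name: threading.Event() for name in deps}
order = []
lock = threading.Lock()

def start(name):
    for dep in deps[name]:      # block until declared dependencies finish
        done[dep].wait()
    with lock:
        order.append(name)      # stand-in for actually running the component
    done[name].set()

with ThreadPoolExecutor() as pool:
    list(pool.map(start, deps))

# a2, a3 and b1 may finish in any order, but always after a1
```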

Regarding the wait condition example:
I get the idea and it surely would work, but I think it is still
un-intuitive and we should think about a more abstract declarative way for
expressing such use cases.

Regarding the native tool bootstrap config proposal:
I agree with other comments already made on this thread that the sheer
number of different config components seems too much. I guess for users it
will be hard to understand which one to use when, what combination makes
sense, in which order they have to be combined etc. Especially, when things
are getting combined, my gut feeling is that the likelihood of templates
breaking whenever some of the underlying implementation changes will
increase.

Steve Baker sba...@redhat.com wrote on 16.10.2013 00:48:53:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-14 Thread Thomas Spatzier
Steven Dake sd...@redhat.com wrote on 11.10.2013 21:02:38:
 From: Steven Dake sd...@redhat.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 11.10.2013 21:04
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

 On 10/11/2013 11:55 AM, Lakshminaraya Renganarayana wrote:
 Clint Byrum cl...@fewbar.com wrote on 10/11/2013 12:40:19 PM:

  From: Clint Byrum cl...@fewbar.com
  To: openstack-dev openstack-dev@lists.openstack.org
  Date: 10/11/2013 12:43 PM
  Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
  proposal for workflows
 
   3. Ability to return arbitrary (JSON-compatible) data structure
 from config
   application and use attributes of that structure as an input for
other
   configs
 
  Note that I'd like to see more use cases specified for this ability.
The
  random string generator that Steve Baker has put up should handle most
  cases where you just need passwords. Generated key sharing might best
  be deferred to something like Barbican which does a lot more than Heat
  to try and keep your secrets safe.

 I had seen a deployment scenario that needed more than random string
 generator. It was during the deployment of a system that has
 clustered application servers, i.e., a cluster of application server
 nodes + a cluster manager node. The deployment progresses by all the
 VMs (cluster-manager and cluster-nodes) starting concurrently. Then
 the cluster-nodes wait for the cluster-manager to send them data
 (xml) to configure themselves. The cluster-manager after reading its
 own config file, generates config-data for each cluster-node and
 sends it to them.

 Is the config data per cluster node unique to each node?  If not:

I think Lakshmi's example (IBM WebSphere, right?) talks about a case where
the per cluster member info is unique per member, so the one-size-fits-all
approach does not work. In addition, I think there is a constraint that
members must join one by one and cannot join concurrently.


 Change deployment to following model:
 1. deploy cluster-manager as a resource with a waitcondition -
 passing the data using the cfn-signal  -d to send the xml blob
 2. have cluster nodes wait on wait condition in #1, using data from
 the cfn-signal

 If so, join the config data sent in cfn-signal and break it apart by
 the various cluster nodes in #2
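The joined-blob variant of that model could look like the following sketch: the cluster manager passes one JSON document via `cfn-signal -d`, and each node, once released by the wait condition, extracts its own slice (the payload shape is invented for illustration):

```python
import json

# Hypothetical payload the cluster manager would pass to `cfn-signal -d`:
# one config document per cluster node, keyed by node name, so that the
# per-member data can be unique even though only one signal is sent.
signal_data = json.dumps({
    "node-1": {"port": 9080, "peers": ["node-2"]},
    "node-2": {"port": 9080, "peers": ["node-1"]},
})

def config_for(node_name, data):
    """Each node, woken by the wait condition, picks out its own slice."""
    return json.loads(data)[node_name]

cfg = config_for("node-2", signal_data)
```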
 Thanks,
 LN


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-10 Thread Thomas Spatzier
Hi all,

Lakshminaraya Renganarayana lren...@us.ibm.com wrote on 10.10.2013
01:34:41:
 From: Lakshminaraya Renganarayana lren...@us.ibm.com
 To: Joshua Harlow harlo...@yahoo-inc.com,
 Cc: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
 Date: 10.10.2013 01:37
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

 Hi Joshua,

 I agree that there is an element of taskflow in what I described.
 But, I am aiming for something much more lightweight which can be
 naturally blended with HOT constructs and Heat engine. To be a bit
 more specific, Heat already has dependencies and coordination
 mechanisms. So, I am aiming for may be just one additional construct
 in Heat/HOT and some logic in Heat that would support coordination.

First of all, the use case you presented in your earlier mail is really
good and illustrative. And I agree that there should be constructs in HOT
to declare those kinds of dependencies. So how to define this in HOT is one
work item.
How this gets implemented is another item, and yes, maybe this is something
that Heat can delegate to taskflow. Because if taskflow has those
capabilities, why re-implement it.


 Thanks,
 LN

 _
 Lakshminarayanan Renganarayana
 Research Staff Member
 IBM T.J. Watson Research Center
 http://researcher.ibm.com/person/us-lrengan


 Joshua Harlow harlo...@yahoo-inc.com wrote on 10/09/2013 03:55:00 PM:

  From: Joshua Harlow harlo...@yahoo-inc.com
  To: OpenStack Development Mailing List openstack-
  d...@lists.openstack.org, Lakshminaraya Renganarayana/Watson/IBM@IBMUS
  Date: 10/09/2013 03:55 PM
  Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
  proposal for workflows
 
  Your example sounds a lot like what taskflow is build for doing.
 
  https://github.com/stackforge/taskflow/blob/master/taskflow/
  examples/calculate_in_parallel.py is a decent example.
 
  In that one, tasks are created and input/output dependencies are
  specified (provides, rebind, and the execute function arguments
itself).
 
  This is combined into the taskflow concept of a flow, one of those
  flows types is a dependency graph.
 
  Using a parallel engine (similar in concept to a heat engine) we can
  run all non-dependent tasks in parallel.
 
  An example that I just created that shows this (and shows it
  running) that closer matches your example.
 
  Program (this will work against the current taskflow codebase):
  http://paste.openstack.org/show/48156/
 
  Output @ http://paste.openstack.org/show/48157/
 
  -Josh
 
  From: Lakshminaraya Renganarayana lren...@us.ibm.com
  Reply-To: OpenStack Development Mailing List openstack-
  d...@lists.openstack.org
  Date: Wednesday, October 9, 2013 11:31 AM
  To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
  proposal for workflows
 
  Steven Hardy sha...@redhat.com wrote on 10/09/2013 05:24:38 AM:
 
  
   So as has already been mentioned, Heat defines an internal workflow,
based
   on the declarative model defined in the template.
  
   The model should define dependencies, and Heat should convert those
   dependencies into a workflow internally.  IMO if the user also needs
to
   describe a workflow explicitly in the template, then we've probably
failed
   to provide the right template interfaces for describing
depenendencies.
 
  I agree with Steven here, models should define the dependencies and
Heat
  should realize/enforce them. An important design issue is granularity
at
  which dependencies are defined and enforced. I am aware of the
 wait-condition
  and signal constructs in Heat, but I find them a bit low-level as
  they are prone
  to the classic dead-lock and race condition problems.  I would like to
have
  higher level constructs that support finer-granularity dependences
which
  are needed for software orchestration. Reading through the
  various discussions
  on this topic in this mailing list, I see that many would like to have
such
  higher level constructs for coordination.
 
  In our experience with software orchestration using our own DSL
 and also with
  some extensions to Heat, we found that the granularity of VMs or
  Resources to be
  too coarse for defining dependencies for software orchestration. For
  example, consider
  a two VM app, with VMs vmA, vmB, and a set of software components
  (ai's and bi's)
  to be installed on them:
 
  vmA = base-vmA + a1 + a2 + a3
  vmB = base-vmB + b1 + b2 + b3
 
  let us say that software component b1 of vmB, requires a config
  value produced by
  software component a1 of vmA. How to declaratively model this
  dependence? Clearly,
  modeling a dependence between just base-vmA and base-vmB is not
  enough. However,
  defining a dependence between the whole of vmA and vmB is too
  coarse. It would be ideal
  to be able to define a dependence at the granularity of software
  

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Thomas Spatzier
Excerpts from Clint Byrum's message

 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 09.10.2013 03:54
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

 Excerpts from Stan Lagun's message of 2013-10-08 13:53:45 -0700:
  Hello,
 
 
  That is why it is necessary to have some central coordination service
which
  would handle deployment workflow and perform specific actions (create
VMs
  and other OpenStack resources, do something on that VM) on each stage
  according to that workflow. We think that Heat is the best place for
such
  service.
 

 I'm not so sure. Heat is part of the Orchestration program, not workflow.


I agree. HOT so far was thought to be a format for describing templates in
a structural, declarative way. Adding workflows would stretch it quite a
bit. Maybe we should see what aspects make sense to be added to HOT, and
then how to do workflow-like orchestration in a layer on top.

  Our idea is to extend HOT DSL by adding  workflow definition
capabilities
  as an explicit list of resources, components’ states and actions.
States
  may depend on each other so that you can reach state X only after
you’ve
  reached states Y and Z that the X depends on. The goal is from initial
  state to reach some final state “Deployed”.
 

We also would like to add some mechanisms to HOT for declaratively doing
software component orchestration in Heat, e.g. saying that one component
depends on another one, or needs input from another one once it has been
deployed etc. (I BTW started to write a wiki page, which is admittedly far
from complete, but I would be happy to work on it with interested folks -
https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider).
However, we must be careful not to make such features too complicated so
nobody will be able to use it any more. That said, I believe we could make
HOT cover some levels of complexity, but not all. And then maybe workflow
based orchestration on top is needed.


 Orchestration is not workflow, and HOT is an orchestration templating
 language, not a workflow language. Extending it would just complect two
 very different (though certainly related) tasks.

 I think the appropriate thing to do is actually to join up with the
 TaskFlow project and consider building it into a workflow service or
tools
 (it is just a library right now).

  There is such state graph for each of our deployment entities (service,
  VMs, other things). There is also an action that must be performed on
each
  state.

 Heat does its own translation of the orchestration template into a
 workflow right now, but we have already discussed using TaskFlow to
 break up the orchestration graph into distributable jobs. As we get more
 sophisticated on updates (rolling/canary for instance) we'll need to
 be able to reason about the process without having to glue all the
 pieces together.

  We propose to extend HOT DSL with workflow definition capabilities
where
  you can describe step by step instruction to install service and
properly
  handle errors on each step.
 
  We already have an experience in implementation of the DSL, workflow
  description and processing mechanism for complex deployments and
believe
  we’ll all benefit by re-using this experience and existing code, having
  properly discussed and agreed on abstraction layers and distribution of
  responsibilities between OS components. There is an idea of
implementing
  part of workflow processing mechanism as a part of Convection proposal,
  which would allow other OS projects to benefit by using this.
 
  We would like to discuss if such design could become a part of future
Heat
  version as well as other possible contributions from Murano team.
 

 Thanks really for thinking this through. Windows servers are not unique
in
 this regard. Puppet and chef are pretty decent at expressing single-node
 workflows but they are awkward for deferring control and resuming on
other
 nodes, so I do think there is a need for a general purpose distributed
 workflow definition tool.

 I'm not, however, convinced that extending HOT would yield a better
 experience for users. I'd prefer to see HOT have a well defined interface
 for where to defer to external workflows. Wait conditions are actually
 decent at that, and I'm sure with a little more thought we can make them
 more comfortable to use.

Good discussion to have, i.e. what extensions we would need in HOT for
making HOT alone more capable, and what we would need to hook it up with
other orchestration like workflows.




Re: [openstack-dev] [Heat] Autoscale and load balancers

2013-10-03 Thread Thomas Spatzier
Thomas Hervé the...@gmail.com wrote on 03.10.2013 09:59:02:

 From: Thomas Hervé the...@gmail.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 03.10.2013 10:01
 Subject: Re: [openstack-dev] [Heat] Autoscale and load balancers


 On Wed, Oct 2, 2013 at 11:07 PM, Thomas
Spatzier thomas.spatz...@de.ibm.com
  wrote:

 A way to achieve the same behavior as you suggest but less verbose would
be
 to use relationships in HOT. We had some discussion about relationships
 earlier this year and in other contexts, but this would fit here very
well.
 And I think you express a similar idea on the wiki page you linked above.
 The model could look like:

 resources:
   server1:
     type: OS::Nova::Server
   server2:
     type: OS::Nova::Server
   loadbalancer:
     type: OS::Neutron::LoadBalancer
     # properties etc.
     relationships:
       - member: server1
       - member: server2

 From an implementation perspective, a relationship would be implemented
 similar to a resource, i.e. there is python code that implements all the
 behavior like modifying or notifying source or target of the
relationship.
 Only the look in the template is different. In the sample above, 'member'
 would be the type of relationship and there would be a corresponding
 implementation. I actually wrote up some thoughts about possible notation
 for relationship in HOT on this wiki page:
 https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider

 We have such concepts in TOSCA and I think it could make sense to apply
 this here. So when evolving HOT we should think about extending a
template
 from just having resources to also having links (instead of association
 resources which are more verbose).

 Thanks for the input, I like the way it's structured. In this
 particular case, I think I'd like the relationships to be on server
 instead of having a list of relationships on the load balancer, but
 the idea remains the same.

Yes, good point. Relationships from the server to the LB make sense; it's
typically the one who wants to something (the server wants to be
load-balanced) to refer to the one offering the service.


 I didn't grasp completely the idea of components just yet, but it
 seems it would be a useful distinction with resources here, as we
 care more about the actual application here (the service running on
 port N on the server), rather than the bare server. It becomes the
 responsibility of the application to register with the load
 balancer, which it can do in a more informed manner (providing the
 weight in the pool for example). We just need a concise and explicit
 way to do that in a template.

If you refer to the components thing in the wiki page I mentioned, this has
been introduced (proposed) in relation to software orchestration. In the
first place, I thought it is not relevant for the loadbalancer use case but
the relationship concept is more important, and that was the reason I
pointed to that wiki page.
But actually you have a good point: it is the workload on top of a server
that gets load balanced. So looks like both topics need to be looked at in
combination.


 --
 Thomas




Re: [openstack-dev] [Heat] Autoscale and load balancers

2013-10-02 Thread Thomas Spatzier
Thomas Hervé the...@gmail.com wrote on 02.10.2013 17:06:48:

 From: Thomas Hervé the...@gmail.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 02.10.2013 17:09
 Subject: [openstack-dev] [Heat] Autoscale and load balancers

 Hi all,

 There is a small but important part of the autoscale design described in
 https://wiki.openstack.org/wiki/Heat/AutoScaling that we'd like to
 discuss to make sure everybody is on the same page. Namely, the
 relationship between an autoscaling group and a load balancer.

 In the current system, a group has references to load balancers, and
 signals them when the group changes, sending it the list of servers
 in the group. This creates an implicit interface between groups and
 load balancers that we didn't model well, and is awkward for third-
 party load balancers.

 In the new design, the suggestion is to have an intermediate
 resource which models the relationship between the load balancer and
 its members. This resource would be instantiated every time an
 instance is added (or removed) and would notify the load balancer.
 There are some more details here: https://wiki.openstack.org/wiki/
 Heat/AutoScaling#Load_Balancers.

I think overall this design makes sense. I have a few thoughts on the
additional resource below.


 Pros:
  * Remove load balancer specific code from the autoscale implementation.
  * Map nicely to the neutron lbaas code, which has a 'add-member' API.
  * Provide a more generic model for notifying systems of servers
allocation.

 Cons:
  * Make the template a bit more verbose.

 We have some ideas on how to alleviate the verbosity concern, for
 example by creating a LoadBalancerServer resource which would embed
 a Server and a LoadBalancerMember resource. But the template being
 Heat's main UI, it's important to get a good story here.

 Thoughts?

A way to achieve the same behavior as you suggest but less verbose would be
to use relationships in HOT. We had some discussion about relationships
earlier this year and in other contexts, but this would fit here very well.
And I think you express a similar idea on the wiki page you linked above.
The model could look like:

resources:
  server1:
type: OS::Nova::Server
  server2:
type: OS::Nova::Server
  loadbalancer:
type: OS::Neutron::LoadBalancer
# properties etc.
relationships:
  - member: server1
  - member: server2

From an implementation perspective, a relationship would be implemented
similar to a resource, i.e. there is python code that implements all the
behavior like modifying or notifying source or target of the relationship.
Only the look in the template is different. In the sample above, 'member'
would be the type of relatioship and there would be a corresponding
implementation. I actually wrote up some thoughts about possible notation
for relationship in HOT on this wiki page:
https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider
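A minimal sketch of what such relationship code could look like is below: a member relationship that notifies its target (the load balancer) when the source server is created or deleted. The class names and hook methods are illustrative only, not Heat's actual resource plugin API:

```python
# Toy sketch of a 'member' relationship implemented in Python: it reacts
# to lifecycle events of its source and notifies its target. Class names
# and hooks are hypothetical, not Heat's plugin interface.
class LoadBalancer:
    def __init__(self):
        self.members = []

    def add_member(self, server):
        self.members.append(server)

    def remove_member(self, server):
        self.members.remove(server)

class MemberRelationship:
    def __init__(self, source_server, target_lb):
        self.source = source_server
        self.target = target_lb

    def on_source_create(self):   # called after the source is created
        self.target.add_member(self.source)

    def on_source_delete(self):   # called before the source is deleted
        self.target.remove_member(self.source)

lb = LoadBalancer()
rel = MemberRelationship("server1", lb)
rel.on_source_create()
```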

We have such concepts in TOSCA and I think it could make sense to apply
this here. So when evolving HOT we should think about extending a template
from just having resources to also having links (instead of association
resources which are more verbose).


 --
 Thomas




Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-10-01 Thread Thomas Spatzier
Clint Byrum cl...@fewbar.com wrote on 01.10.2013 08:31:44 - Excerpt:

 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 01.10.2013 08:33
 Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things
 together for Icehouse (now featuring software orchestration)

 Excerpts from Georgy Okrokvertskhov's message of 2013-09-30 11:44:26
-0700:
  Hi,
 
  I am working on the OpenStack project Murano which actually had to
solve
  the same problem with software level orchestration. Right now Murano
has a
  DSL language which allows you to define a workflow for a complex
service
  deployment.
  ...
 

 Hi!

 We've written some very basic tools to do server configuration for the
 OpenStack on OpenStack (TripleO) Deployment program. Hopefully we can
 avert you having to do any duplicate work and join forces.

 Note that configuring software and servers is not one job. The tools we
 have right now:

 os-collect-config - agent to collect data from config sources and trigger
 commands on changes. [1]

 os-refresh-config - run scripts to manage state during config changes
 (run-parts but more structured) [2]

 os-apply-config - write config files [3]

 [1] http://pypi.python.org/pypi/os-collect-config
 [2] http://pypi.python.org/pypi/os-refresh-config
 [3] http://pypi.python.org/pypi/os-apply-config
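The division of labour among these tools can be illustrated with a toy version of the write-config-files step; os-apply-config itself renders Mustache templates against collected metadata, so the snippet below is only a much-simplified stand-in, not the tool's code:

```python
import re

# Toy stand-in for the os-apply-config step: substitute values collected
# from config sources into a {{key}}-style template. The real tool uses
# Mustache and dotted metadata paths; this only shows the split between
# collecting data (os-collect-config) and writing files (os-apply-config).
metadata = {"db_host": "10.0.0.5", "db_port": "5432"}
template = "host = {{db_host}}\nport = {{db_port}}\n"

def render(tmpl, data):
    return re.sub(r"\{\{(\w+)\}\}", lambda m: data[m.group(1)], tmpl)

rendered = render(template, metadata)
```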

 We do not have a tool to do run-time software installation, because we
 are working on an image based deployment method (thus building images
 with diskimage-builder).  IMO, there are so many good tools already
 written that get this job done, doing one just for the sake of it being
 OpenStack native is a low priority.

 However, a minimal thing is needed for Heat users so they can use it to
 install those better tools for ongoing run-time configuration. cfn-init
 is actually pretty good. Its only crime other than being born of Amazon
 is that it also does a few other jobs, namely file writing and service
 management.

Right, there has been some discussion going on to find the right level of
software orchestration to go into Heat. As Clint said, there are a couple
of things out there already, like what the tripleO project has been doing.
And there are proposals / discussions going on to see how users could
include some level of software orchestration into HOT, e.g.

https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider

and how such constructs in HOT would align with assets already out there.
So Georgy's item is another one in that direction and it would be good to
find some common denominator.


 Anyway, before you run off and write an agent, I hope you will take a
look
 at os-collect-config and considering using it. For the command to run, I
 recommend os-refresh-config as you can have it run a progression of
config
 tools. For what to run in the configuration step of os-refresh-config,
 cfn-init would work, however there is a blueprint for a native interface
 that might be a bit different here:

 https://blueprints.launchpad.net/heat/+spec/native-tools-bootstrap-config

  When do you have a meeting for HOT software configuration discussion? I
  think we can add value here for Heat as we have already required
components
  for software orchestration with full integration with OpenStack and
  Keystone in particular.

 Heat meets at 2000 UTC every Wednesday.

 TripleO meets at 2000 UTC every Tuesday.

 Hope to see you there!

In addition, it looks like there will be some design sessions on that topic
at the HK summit, so if you happen to be there that could be another good
chance to talk.







Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Thomas Spatzier
Clint Byrum cl...@fewbar.com wrote on 25.09.2013 08:46:57:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 25.09.2013 08:48
 Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things
 together for Icehouse (now featuring software orchestration)

 Excerpts from Mike Spreitzer's message of 2013-09-24 22:03:21 -0700:
  Let me elaborate a little on my thoughts about software orchestration,
and
  respond to the recent mails from Zane and Debo.  I have expanded my
  picture at
  https://docs.google.com/drawings/d/
 1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
  and added a companion picture at
  https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-
 GQQ1bRVgBpJdstpu0lH_TONw6g
  that shows an alternative.
 
  One of the things I see going on is discussion about better techniques
for
  software orchestration than are supported in plain CFN.  Plain CFN
allows
  any script you want in userdata, and prescription of certain additional

  setup elsewhere in cfn metadata.  But it is all mixed together and very

  concrete.  I think many contributors would like to see something with
more
  abstraction boundaries, not only within one template but also the
ability
  to have modular sources.
 

 Yes please. Orchestrate things, don't configure them. That is what
 configuration tools are for.

 There is a third stealth-objective that CFN has caused to linger in
 Heat. That is packaging cloud applications. By allowing the 100%
 concrete CFN template to stand alone, users can ship the template.

 IMO this marrying of software assembly, config, and orchestration is a
 concern unto itself, and best left outside of the core infrastructure
 orchestration system.

  I work closely with some colleagues who have a particular software
  orchestration technology they call Weaver.  It takes as input for one
  deployment not a single monolithic template but rather a collection of
  modules.  Like higher level constructs in programming languages, these
  have some independence and can be re-used in various combinations and
  ways.  Weaver has a compiler that weaves together the given modules to
  form a monolithic model.  In fact, the input is a modular Ruby program,

  and the Weaver compiler is essentially running that Ruby program; this
  program produces the monolithic model as a side effect.  Ruby is a
pretty
  good language in which to embed a domain-specific language, and my
  colleagues have done this.  The modular Weaver input mostly looks
  declarative, but you can use Ruby to reduce the verboseness of, e.g.,
  repetitive stuff --- as well as plain old modularity with abstraction.
We
  think the modular Weaver input is much more compact and better for
human
  reading and writing than plain old CFN.  This might not be obvious when

  you are doing the hello world example, but when you get to realistic
  examples it becomes clear.
 
  The Weaver input discusses infrastructure issues, in the rich way Debo
and
  I have been advocating, as well as software.  For this reason I
describe
  it as an integrated model (integrating software and infrastructure
  issues).  I hope for HOT to evolve to be similarly expressive to the
  monolithic integrated model produced by the Weaver compiler.

I don't fully get this idea of HOT consuming a monolithic model produced by
some compiler - be it Weaver or anything else.
I thought the goal was to develop HOT in a way that users can actually
write HOT, as opposed to having to use some compiler to produce some
useful model.
So wouldn't it make sense to add the right concepts to HOT so that we can
express what we want to express and get things like composability, re-use,
and substitutability?
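As a purely illustrative sketch of what such composability could look like at the HOT level (hypothetical syntax and names - `My::DBTier` and `db_tier.yaml` are invented for this example, nothing here is agreed in the thread):

```yaml
# env.yaml: map an abstract type name to a concrete template module
resource_registry:
  My::DBTier: db_tier.yaml
---
# main template: refers only to the abstract type; a different
# environment could substitute another implementation of My::DBTier
# without touching this template
resources:
  database:
    type: My::DBTier
    properties:
      db_name: wordpress
```

The point being that re-use and substitutability would come from HOT-level constructs rather than from an external compiler producing a monolithic model.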

 

 Indeed, we're dealing with this very problem in TripleO right now. We
need
 to be able to compose templates that vary slightly for various reasons.

 A ruby DSL is not something I think is ever going to happen in
 OpenStack. But python has its advantages for DSL as well. I have been
 trying to use clever tricks in yaml for a while, but perhaps we should
 just move to a client-side python DSL that pushes the compiled yaml/json
 templates into the engine.

As said in my comment above, I would like to see us focus on agreeing on one
language - HOT - instead of creating yet another DSL.
There are well-established tools out there (like Chef or Puppet), and HOT
should be able to use them efficiently and intuitively, and to orchestrate
components built with them.

Anyway, this might be off the track that was originally discussed in this
thread (i.e. holistic scheduling and so on) ...






Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Thomas Spatzier
Excerpt from Clint's mail on 25.09.2013 22:23:07:


 I think we already have some summit suggestions for discussing HOT,
 it would be good to come prepared with some visions for the future
 of HOT so that we can hash these things out, so I'd like to see this
 discussion continue.

Absolutely! Can those involved in the discussion check if this seems to be
covered in one of the session proposals that I or others posted recently, and
if not, raise another proposal? This is a good one to have.







Re: [openstack-dev] [Heat] Long-term, how do we make heat image/flavor name agnostic?

2013-07-18 Thread Thomas Spatzier
Steve Baker sba...@redhat.com wrote on 18.07.2013 00:00:40:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 18.07.2013 00:08
 Subject: Re: [openstack-dev] [Heat] Long-term, how do we make heat
 image/flavor name agnostic?

 On 07/18/2013 08:53 AM, Gabriel Hurley wrote:
  I spent a bunch of time working with and understanding Heat in H2,
 and I find myself with one overarching question which I wonder if
 anyone's thought about or even answered already...
 
  At present, the CloudFormation template format is the first-class
 means of doing things in Heat. CloudFormation was created for
 Amazon, and Amazon has this massive convenience of having a (more or
 less) static list of images and flavors that they control. Therefore
 in CloudFormation everything is specified by a unique, specific name.
 
  OpenStack doesn't have this luxury. We have as many image and
 flavor names as we have deployments. Now, there are simple answers...
 
1. Name everything the way Amazon does, or
2. Alter your templates.
 
  But personally, I don't like either of these options. I think in
 the long term we win at platform/ecosystem by making it possible to
  take a template off the internet and have it work on *any* OpenStack
  cloud.

+1 on making the selection based on requirements like platform, arch etc. for
images, and minimum cpu, memory and ephemeral storage requirements for
flavors.
I think that would allow a template writer to express exactly what he needs
for his template to work reasonably without binding to any specific naming.
I think we can do something in this direction when we work on the
requirements feature in HOT.
In addition, wouldn't it make sense to enable lookup of flavors as a
starter (image could come later, because metadata could become more
complex) in the native instance resource Steven Dake is working on?
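To make that concrete, requirement-based selection might look roughly like the following (entirely hypothetical syntax - no such `requirements` properties exist at this point; the property names below are invented for illustration):

```yaml
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      # instead of naming a concrete flavor/image, state minimum
      # requirements and let the engine (or client) pick a matching
      # flavor and image in the target cloud
      flavor_requirements:
        min_vcpus: 2
        min_ram_mb: 4096
        min_disk_gb: 20
      image_requirements:
        distro: fedora
        arch: x86_64
```

A template written this way would be portable across clouds with differing flavor and image names, which is exactly the problem raised above.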

 
  To get there, we need a system that chooses images based on
 metadata (platform, architecture, distro) and flavors based on
 actual minimum requirements.
 
  Has anyone on the Heat team thought about this? Are there efforts
 in the works to alleviate this? Am I missing something obvious?
 
 Yes, each openstack cloud could have completely different flavors and
 images available. My current approach is to not have a Mappings section
 at all and just specify the flavor and image on launch, ie:

 Parameters:
   KeyName:
     Type: String
   InstanceType:
     Type: String
   ImageId:
     Type: String
 ...
 Resources:
   SmokeServer:
     Type: AWS::EC2::Instance
     Properties:
       ImageId: {Ref: ImageId}
       InstanceType: {Ref: InstanceType}
       KeyName: {Ref: KeyName}
 ...

While this would solve the problem of binding to specific names, it could
lead to binding to something that is not in line with what the template
writer intended, so the template may not work as expected. E.g. how can the
deployer know what size the instances should have for the stack to perform
well, without minimum requirements being expressed (see my earlier comment
above)?


 InstanceType and ImageId could even be specified in the environment file
 that is specified on launch, so they don't need to be specified in the
 launch command, ie env.yaml:
 parameters:
   KeyName: heat_key
   InstanceType: m1.micro
   ImageId: ubuntu-vm-heat-cfntools-tempest

 heat stack-create mystack -e env.yaml --template-file=mytemplate.yaml






Re: [openstack-dev] [Heat] Does it make sense to have a resource-create API?

2013-06-19 Thread Thomas Spatzier
Angus Salkeld asalk...@redhat.com wrote on 19.06.2013 03:09:34:

 From: Angus Salkeld asalk...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 19.06.2013 03:19
 Subject: Re: [openstack-dev] [Heat] Does it make sense to have a
 resource-create API?

 On 18/06/13 23:32 +, Adrian Otto wrote:
 Yes. I think having a POST method in the API makes perfect sense.
 Assuming we reach agreement on that, the next question that comes up is:

 Err, I am not convinced.

 Before that, I think it's worth highlighting the different
 proposals/requirements here:

 1) heat needs a way to setup an autoscaling group
 2) autoscaling needs a way of telling heat to scale up/down
 3) People might want to integrate other orchestration engines with Heat

 So one by one:
 1) autoscaling has a POST API that the Heat autoscaling group resource
 posts to; Heat will provide a webhook in the event of a scaling action.
 2) when autoscaling determines that there should be a scaling action
 it calls the webhook. The reason I suggest a webhook is in-instance
 applications might want to scale their applications (like
 openshift-gears) - so don't assume a heat endpoint.
 Also we can have this as an action not a resource-create.
 PUT /$tenant/stacks/$stack/resources/autoscale-group
 (with an action of scale-up/down)
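For illustration only, such a scale-up action might look like this on the wire (the URL shape and JSON payload below are hypothetical, not an agreed API):

```
PUT /v1/$tenant/stacks/$stack/resources/autoscale-group HTTP/1.1
Content-Type: application/json
X-Auth-Token: $token

{"action": "scale-up", "adjustment": 1}
```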

 I think we should discourage (make it impossible for) users to modify
 stacks outside of stack-update.

 To me one of the most powerful and appealing things about Heat is the
 ability to reproducibly re-create a stack from a template. This
 new public API is going to make this difficult.

 Also, creating a resource is not a trivial issue: the user would have to
 create the resources in the correct order (with correct inter-resource
 references etc.), mostly throwing away the point of orchestration in
 the first place. If you are doing this you may as well talk to
 nova/cinder/networking directly.

I agree on all of the above. I've read other responses to this thread and
do not yet get why out of band creation of resources should be allowed in
Heat. Updates might be ok, but I am not convinced on the create case for
the reasons Angus brought up. Heat is doing orchestration based on
templates, so bypassing this takes the orchestration away.
Sure, this could be handy in some cases, but once you open up an API like
this, a lot of nasty things could start to evolve.

If I am following this correctly, it all started with an autoscaling
discussion that raised the question of a resource-create API. So why isn't
autoscaling like any other resource (an instance, a swift bucket, etc.) and
if someone likes to create it out of band, go ahead and just talk to the
respective service (like you would do with nova for an instance). If you
want to do it as part of orchestration, include the autoscaling resource in
a Heat template and use Heat.
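For reference, declaring autoscaling as just another orchestrated resource is already possible via the CFN-compatible resource types. A minimal sketch (assuming a `LaunchConfig` launch-configuration resource is defined elsewhere in the same template):

```yaml
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones: {'Fn::GetAZs': ''}
      LaunchConfigurationName: {Ref: LaunchConfig}
      MinSize: '1'
      MaxSize: '3'
```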


 
 How to do you modify resources that have been created with a POST?
 
 You mention HTTP PUT as an answer to that. Unfortunately PUT is
 only really useful for doing a full resource replacement, not just
 tweaking something that's already there. For that, you really want
 HTTP PATCH (http://tools.ietf.org/html/rfc5789). You can make this
 really elegant for JSON with JSON Patch (http://tools.ietf.org/html/
 draft-ietf-appsawg-json-patch-10).
 
 We should note that offering API methods to adjust a stack (aka:
 assembly) means that there will be a divergence between what's
 described in the original template, and the actual running state of
 the stack/assembly created by the template, well beyond the results
 of an autoscale policy. In fact, it would be possible to build a
 stack/assembly with no template at all, if the right API methods are
 present. There are good use cases for this, particularly for higher
 level compatibility layers where it would be awkward to generate
 permutations of templates to immediately feed into an API, rather
 than just use an API method for adjusting the stack/assembly in
 place. it would be much more elegant, for example, to implement a
 CAMP implementation on top of Heat if Heat had a REST API for
 creating and managing individual resources within a stack/assembly.
 This same argument applies to integrating any other orchestration or
 configuration management system with Heat.

 So this is 3):
 As I was alluding to above, if you are integrating at this level, what
 value is Heat providing to you? If you are just using it for resource
 creation, it is a thin layer over the python clients. I'd suggest just
 using the resources as python plugins. I think exposing this would
 bring more harm than good.

 -Angus

 More comments in-line:
 
 On Jun 18, 2013, at 3:44 PM, Christopher Armstrong
 chris.armstr...@rackspace.com
  wrote:
 
  tl;dr POST /$tenant/stacks/$stack/resources/ ?
 
 Yes.
 
  == background ==
 
  While thinking about the Autoscaling API, Thomas Hervé and I had the
  following consideration:
 
  - autoscaling is implemented as