I completely agree with Georgy, but you raised some questions about Heat that I want to answer in the interests of spreading knowledge about how Heat works. A heavily-snipped response follows...

On 21/03/14 05:11, Stan Lagun wrote:
> 3. Despite HOT being more secure on the surface, it is not necessarily so
> in reality. There is a Python class behind each entry in the resources
> section of a HOT template. That Python code runs with root privileges
> and is not guaranteed to be safe. People make mistakes, forget to validate
> parameters, make incorrect assumptions, etc. Even if the code is proven
> to be secure, every single commit can introduce a security breach. And no
> testing system can detect this.

Quite right, I should acknowledge that it would be crazy to assume that HOT is secure just because it is not a programming language, and I do not make that assumption. (Indeed, YAML itself has been the subject of many security problems, though afaik not in the safe mode that we use in Heat.) Thanks for pointing out that I was not clear.

>> The operator can install whatever plugins they want.

> They do, but that is a bad solution. The reason is that plugins can
> introduce additional resource types, but they cannot modify existing
> code. Most of the time cloud operators need to customize the logic of
> existing resources for their needs rather than rewriting it from scratch.
> And they want their changes to be opaque to end users. Imagine that a
> cloud operator needs to get permission from his proprietary quota
> management system for each VM spawned. If he created a custom
> MyInstance resource type, end users could bypass it by using the standard
> Instance resource rather than the custom one. Patching the existing Python
> code is no good either, in that the operator then needs to maintain a
> private fork of Heat and has trouble with CD, upgrades to newer versions, etc.

It's not as bad as you think. All of the things you mentioned were explicit design goals of the plugin system. If you install a plugin resource with the same type as a built-in resource, it replaces the built-in one. And of course you can inherit from an existing plugin to customise it.

So in this example, the operator would create a plugin like this:

  from heat.engine.resources import server
  from my.package import do_my_proprietary_quota_thing

  class MyServer(server.Server):
      def handle_create(self):
          # Check the proprietary quota system before creating the server
          do_my_proprietary_quota_thing()
          return super(MyServer, self).handle_create()

  def resource_mapping():
      # Registering the standard type name overrides the built-in resource
      return {'OS::Nova::Server': MyServer}

and drop it in /usr/lib/heat. As you can see, this is a simple customisation (10 lines of code), completely opaque to end users (OS::Nova::Server is replaced), and highly unlikely to be broken by any changes in Heat (we consider the public APIs of heat.engine.resource.Resource as a contract with existing plugins that we can't break, at least without a lot of notice).

(I'm ignoring here that if this is needed for _every_ server, it makes no sense to do it in Heat, unless you don't expose the Nova API to users at all.)

> Besides, the plugin system is not secure, because plugins run with the
> privileges of the Heat engine, and while I may trust the Heat developers
> (community), I don't necessarily trust a 3rd-party proprietary plugin.

I'm not sure who 'I' refers to in this context. As an end user, you have no way of auditing what code your cloud provider is running in general.


>>> What if he wants auto-scaling to be based on input from his existing
>>> Nagios infrastructure rather than Ceilometer?


>> This is supported already in autoscaling. Ceilometer just hits a URL
>> for an alarm, but you don't have to configure it this way. Anything
>> can hit the URL.
>>
>> And this is a good example of our general approach - we provide a
>> way that works using built-in OpenStack services and a hook that
>> allows you to customise it with your own service, running on your
>> own machine (whether that be an actual machine or an OpenStack
>> Compute server). What we *don't* do is provide a way to upload your
>> own code that we then execute for you as some sort of secondary
>> Compute service.


> 1. Anything can hit the URL, but it is the auto-scaling resource that
> creates Ceilometer alarms. And what should I do to make it create Nagios
> alarms, for example?

That's incorrect; autoscaling doesn't create any alarms. You create an alarm explicitly using the Ceilometer API, or using an OS::Ceilometer::Alarm resource in Heat. Or not, if you want to use some other source for alarms. You connect them together by getting the alarm_url attribute from the autoscaling policy resource and passing it to the Ceilometer alarm, but you could also allow any alarm source you care to use to hit that URL.

      [Ceilometer]                      [Heat]
  Metrics ---> Alarm - - - - -> Policy ---> Scaling Group
                         ^
                      (webhook)

A second option is that you can also feed metrics to the Ceilometer API yourself. In many cases this may be what you want; the hook exists more so that you can implement more complex policies than the ones that autoscaling supports natively. (Note that we *could* have defined a new language for implementing arbitrarily-complex policies and executing them in the autoscaling service, but instead we just added this hook.)

      [Ceilometer]                                [Heat]
  Metrics ---> Alarm - - ->|  Ext.  | - - -> Policy |
                           | Policy |               |---> Scaling Group
                           | Engine | - - -> Policy |
  Metrics ---> Alarm ----->|        |
        [Nagios]

> 2. Your approach has its pros and cons. I do acknowledge and respect the
> strong sides of such a decision. But it has its limitations.

Yes, I accept that it has limitations. And I even acknowledge that some people will see that as a Bad Thing ;)


>> Everything is a combination of existing resources, because the set
>> of existing resources is the set of things which the operator
>> provides as-a-Service. The set of things that the operator provides
>> as a service plus the set of things that you can implement yourself
>> on your own server (virtual or not) covers the entire universe of
>> things. What you appear to be suggesting is that OpenStack must
>> provide *Everything*-as-a-Service by allowing users to write their
>> own services and have the operator execute them as-a-Service. This
>> would be a breathtakingly ambitious undertaking, and I don't mean
>> that in a good way.


> 1. By existing resources I mean resource types that are available in
> Heat. If I need to talk to Marconi during deployment but there is no
> Marconi plugin yet available in my Heat installation, or to use the
> latest feature introduced by yesterday's commit to Nova, I'm in trouble.

This is the cloud provider's responsibility to deal with. If your cloud provider provides a Message Queue service but doesn't provide a Heat plugin for it, you vote with your feet and find one that does. OpenStack is Open Source, and providers will be subject to these kinds of competitive pressures by design.

(It has actually been suggested to require incubated projects to come up with Heat plugins, although not for this reason, but sadly that has been torpedoed for now by politicking on an unrelated topic.)

> 2. If you can implement something with user-land resources, you can do
> the same with Murano. It is not that Murano forces you to do it on the
> server side.
>
> 3. Not everything can be done from VMs. There are use cases where you
> need to access the cloud operator's proprietary services, and even
> hardware components, that just cannot be reached from user-land, or that
> need to be done prior to VM spawn.

If a cloud provider provides proprietary services, it's up to the cloud provider to provide you with a way to access them. They have every incentive to do so.


> 1. As for autoscaling, the last time I checked (it may have been fixed
> since then) Heat's LoadBalancer spawned HAProxy on a Fedora VM with a
> hardcoded image name and a hardcoded nested stack template. This is not
> what I would call a highly customizable solution. It is hard to imagine
> a generic enough

Yah, it's terrible :D

The current implementation is a kind of template/plugin hybrid, simply because it predates the provider feature we've been discussing. It needs to be reimplemented as a single template file that operators can easily modify to suit their needs; as far as I know, there are no remaining technical barriers to doing this.

The good news is that with the provider templates feature, you can use your own definition (defined in a Heat template) for this, or indeed any other, resource type.
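
As a sketch, an environment file along these lines (the template path is hypothetical) maps the built-in type to your own definition:

  resource_registry:
    # Replace the built-in LoadBalancer with an operator-supplied template
    "AWS::ElasticLoadBalancing::LoadBalancer": "file:///etc/heat/templates/my_lb.yaml"

Any stack created with that environment (e.g. heat stack-create -e env.yaml ...) then uses your implementation wherever a template references that resource type.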

> autoscaling implementation that could work with all possible
> health-monitoring systems and load balancers, and would be as useful for
> scaling RabbitMQ, MongoDB and MS SQL Server clusters as it is for web

This is absolutely the goal for autoscaling in Heat. Notifying the load balancer is the hardest part, but some ideas have been proposed (including the same sort of notification system I'm suggesting for the workflow hooks).

> farms. Surely you can have your own implementation with user-land
> resources, but why have you chosen it to be a Heat resource and not a
> sample HOT template in an extras repository?

Historical reasons. The implementation pre-dates provider templates by about a year, and exists primarily as a way of demonstrating our ability to launch existing CloudFormation templates. The recommended way, of course, is to use the Neutron load balancer resource, now that that exists. If you want to test on a cloud that doesn't support it, then that's a good time to use a provider template to replace it.

> Besides, we both want Heat and Murano to be really useful and not just
> Chef/Puppet bootstrappers :)

FWIW I personally am completely fine with Heat being 'just' a Puppet/Chef bootstrapper. Heat's goal is to orchestrate *OpenStack* resources (i.e. infrastructure), not to replace configuration management. If replacing configuration management were the goal, I don't think it would belong in OpenStack, since that is neither something that should be tied to OpenStack (configuration management is useful everywhere), nor something OpenStack should be tied to (you should be able to use *any* configuration management system with OpenStack).


> You know that there is a de facto standard DSL for workflow
> definitions. It is called BPEL, and it is Turing-complete and as
> expressive as MuranoPL. There are alternative standards like BPMN and
> YAWL, and AFAIK they are all Turing-complete.

My feelings about those are not fit for publication ;)

> So what makes you feel like the Mistral DSL doesn't need to be so?

Amazon SWF.

cheers,
Zane.
