On 19/11/13 19:14, Christopher Armstrong wrote:
On Mon, Nov 18, 2013 at 5:57 AM, Zane Bitter <zbit...@redhat.com> wrote:
On 16/11/13 11:15, Angus Salkeld wrote:
On 15/11/13 08:46 -0600, Christopher Armstrong wrote:
On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter <zbit...@redhat.com> wrote:
On 15/11/13 02:48, Christopher Armstrong wrote:
On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld <asalk...@redhat.com> wrote:
On 14/11/13 10:19 -0600, Christopher Armstrong wrote:
http://docs.heatautoscale.apiary.io/
I've thrown together a rough sketch of the proposed API for autoscaling. It's written in API-Blueprint format (which is a simple subset of Markdown) and provides schemas for inputs and outputs using JSON-Schema. The source document is currently at
https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
Things we still need to figure out:
- how to scope projects/domains. put them in the URL? get them from the token?
- how webhooks are done (though this shouldn't affect the API too much; they're basically just opaque)

Please read and comment :)
Hi Christopher

In the group create object you have 'resources'. Can you explain what you expect in there? I thought we talked at summit about having a unit of scaling as a nested stack.
The thinking here was:
- this makes the new config stuff easier to scale (config gets applied per scaling stack)
- you can potentially place notification resources in the scaling stack (think marconi message resource - on-create it sends a message)
- no need for a launchconfig
- you can place a LoadbalancerMember resource in the scaling stack that triggers the loadbalancer to add/remove it from the lb.

I guess what I am saying is I'd expect an api to a nested stack.
Well, what I'm thinking now is that instead of "resources" (a mapping of resources), just have "resource", which can be the template definition for a single resource. This would then allow the user to specify a Stack resource if they want to provide multiple resources. How does that sound?
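For illustration, a launch config under that singular-"resource" idea might look something like this (a sketch only - the "resource" key, the nested-stack type name, and its properties here are assumptions about a not-yet-settled API, not anything that exists today):

POST /launch_configs
{
    "name": "my-launch-config",
    "resource": {
        "type": "OS::Heat::Stack",
        "properties": {
            "template_url": "http://example.com/scaling-unit.yaml"
        }
    }
}

The point being that a user wanting a multi-resource scaling unit would point the single "resource" at a stack, while the simple case stays a one-resource definition.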
My thought was this (digging into the implementation here a bit):

- Basically, the autoscaling code works as it does now: creates a template containing OS::Nova::Server resources (changed from AWS::EC2::Instance), with the properties obtained from the LaunchConfig, and creates a stack in Heat.
- LaunchConfig can now contain any properties you like (I'm not 100% sure about this one*).
- The user optionally supplies a template. If the template is supplied, it is passed to Heat and set in the environment as the provider for the OS::Nova::Server resource.
I don't like the idea of binding to OS::Nova::Server specifically for autoscaling. I'd rather have the ability to scale *any* resource, including nested stacks or custom resources. It seems like jumping through hoops to

big +1 here, autoscaling should not even know what it is scaling, just some resource. solum might want to scale all sorts of non-server resources (and other users).
I'm surprised by the negative reaction to what I suggested, which is a completely standard use of provider templates. Allowing a user-defined stack of resources to stand in for an unrelated resource type is the entire point of providers. Everyone says that it's a great feature, but if you try to use it for something they call it a "hack". Strange.
To clarify this position (which I already did in IRC), replacing one concrete resource with another that means something in a completely different domain is a hack -- say, replacing "server" with "group of related resources". However, replacing OS::Nova::Server with something which still does something very much like creating a server is reasonable -- e.g., using a different API like one for creating containers or using a different cloud provider's API.
Sure, but at the end of the day it's just a name that is used internally and which a user would struggle to even find referenced anywhere (I think if they look at the resources created by the autoscaling template it *might* show up). The name is completely immaterial to the idea, as demonstrated below where I did a straight string substitution (1 line in the environment) for a better name and nothing changed.
So, allow me to make a slight modification to my proposal:

- The autoscaling service manages a template containing OS::Heat::ScaledResource resources. This is an imaginary resource type that is not backed by a plugin in Heat.
- If no template is supplied by the user, the environment declares another resource plugin as the provider for OS::Heat::ScaledResource (by default it would be OS::Nova::Server, but this should probably be configurable by the deployer... so if you had a region full of Docker containers and no Nova servers, you could set it to OS::Docker::Container or something).
- If a provider template is supplied by the user, it would be specified as the provider in the environment file.

This, I hope, demonstrates that autoscaling needs no knowledge whatsoever about what it is scaling to use this approach.
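Concretely, the environment the autoscaling service would pass to Heat is a one-line resource_registry mapping (Heat environment files already support exactly this; only the OS::Heat::ScaledResource name is the imaginary part described above). With no user template, the deployer's default:

resource_registry:
    "OS::Heat::ScaledResource": "OS::Nova::Server"

and when the user supplies a provider template, the same key just points at that template instead:

resource_registry:
    "OS::Heat::ScaledResource": "http://example.com/my-provider-template.yaml"

Swapping the name for a better one really is the one-line string substitution mentioned above.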
It'd be interesting to see some examples, I think. I'll provide some examples of my proposals, with the following caveats:

Excellent idea, thanks :)

- I'm assuming a separation of launch configuration from scaling group, as you proposed -- I don't really have a problem with this.
- I'm also writing these examples with the plural "resources" parameter, which there has been some bikeshedding around - I believe the structure can be the same whether we go with singular, plural, or even whole-template-as-a-string.
# trivial example: scaling a single server
POST /launch_configs
{
    "name": "my-launch-config",
    "resources": {
        "my-server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "my-image",
                "flavor": "my-flavor" # etc...
            }
        }
    }
}
This case would be simpler with my proposal, assuming we allow a default:
POST /launch_configs
{
    "name": "my-launch-config",
    "parameters": {
        "image": "my-image",
        "flavor": "my-flavor" # etc...
    }
}
If we don't allow a default it might be something more like:
POST /launch_configs
{
    "name": "my-launch-config",
    "parameters": {
        "image": "my-image",
        "flavor": "my-flavor" # etc...
    },
    "provider_template_uri":
        "http://heat.example.com/<tenant_id>/resource_types/OS::Nova::Server/template"
}
POST /groups
{
    "name": "group-name",
    "launch_config": "my-launch-config",
    "min_size": 0,
    "max_size": 0
}
This would be the same.
(and then, the user would continue on to create a policy that scales the
group, etc)
# complex example: scaling a server with an attached volume
POST /launch_configs
{
    "name": "my-launch-config",
    "resources": {
        "my-volume": {
            "type": "OS::Cinder::Volume",
            "properties": {
                # volume properties...
            }
        },
        "my-server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "my-image",
                "flavor": "my-flavor" # etc...
            }
        },
        "my-volume-attachment": {
            "type": "OS::Cinder::VolumeAttachment",
            "properties": {
                "volume_id": {"get_resource": "my-volume"},
                "instance_uuid": {"get_resource": "my-server"},
                "mountpoint": "/mnt/volume"
            }
        }
    }
}
This appears slightly more complex on the surface; I'll explain why in a second.
POST /launch_configs
{
    "name": "my-launch-config",
    "parameters": {
        "image": "my-image",
        "flavor": "my-flavor" # etc...
    },
    "provider_template": {
        "hot_format_version": "some random date",
        "parameters": {
            "image_name": {
                "type": "string"
            },
            "flavor": {
                "type": "string"
            } # &c. ...
        },
        "resources": {
            "my-volume": {
                "type": "OS::Cinder::Volume",
                "properties": {
                    # volume properties...
                }
            },
            "my-server": {
                "type": "OS::Nova::Server",
                "properties": {
                    "image": {"get_param": "image_name"},
                    "flavor": {"get_param": "flavor"} # etc...
                }
            },
            "my-volume-attachment": {
                "type": "OS::Cinder::VolumeAttachment",
                "properties": {
                    "volume_id": {"get_resource": "my-volume"},
                    "instance_uuid": {"get_resource": "my-server"},
                    "mountpoint": "/mnt/volume"
                }
            }
        },
        "outputs": {
            "public_ip_address": {
                "value": {"get_attr": ["my-server", "public_ip_address"]} # &c. ...
            }
        }
    }
}
(BTW the template could just as easily be included in the group rather than the launch config. If we put it here we can validate the parameters though.)
There are a number of advantages to including the whole template, rather than a resource snippet:
- Templates are versioned!
- Templates accept parameters
- Templates can provide outputs - we'll need these when we go to do notifications (e.g. to load balancers).
The obvious downside is there's a lot of fiddly stuff to include in the template (hooking up the parameters and outputs), but this is almost entirely mitigated by the fact that the user can get a template, ready built with the server hooked up, from the API by hitting /resource_types/OS::Nova::Server/template and just edit in the Volume and VolumeAttachment. (For a different example, they could of course begin with a different resource type - the launch config accepts any keys for parameters.) To the extent that this encourages people to write templates where the outputs are actually supplied, it will help reduce the number of people complaining their load balancers aren't forwarding any traffic because they didn't surface the IP addresses.
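To make the "just edit in the Volume and VolumeAttachment" step concrete, here's a rough client-side sketch in Python. The base_template dict below stands in for what GET /resource_types/OS::Nova::Server/template would return - the exact parameter names and attributes Heat generates may differ, so treat them as assumptions:

```python
import json

# Stand-in for the ready-built template fetched from
# /resource_types/OS::Nova::Server/template (structure assumed).
base_template = {
    "heat_template_version": "2013-05-23",
    "parameters": {
        "image": {"type": "string"},
        "flavor": {"type": "string"},
    },
    "resources": {
        "my-server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": {"get_param": "image"},
                "flavor": {"get_param": "flavor"},
            },
        }
    },
    "outputs": {},
}


def add_volume(template, size_gb=1, mountpoint="/mnt/volume"):
    """Return a copy of the template with a volume attached to my-server,
    and the server's IP surfaced as an output for e.g. load balancers."""
    t = json.loads(json.dumps(template))  # cheap deep copy
    t["resources"]["my-volume"] = {
        "type": "OS::Cinder::Volume",
        "properties": {"size": size_gb},
    }
    t["resources"]["my-volume-attachment"] = {
        "type": "OS::Cinder::VolumeAttachment",
        "properties": {
            "volume_id": {"get_resource": "my-volume"},
            "instance_uuid": {"get_resource": "my-server"},
            "mountpoint": mountpoint,
        },
    }
    # Surfacing the address is what saves the "my load balancer gets no
    # traffic" complaint; "public_ip_address" is an illustrative name.
    t["outputs"]["public_ip_address"] = {
        "value": {"get_attr": ["my-server", "first_address"]}
    }
    return t


provider_template = add_volume(base_template)
```

The edited dict would then be sent as the "provider_template" value in the launch config, exactly as in the complex example above.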
(and so on, creating the group and policies in the same way).
ditto.
Can you please provide an example of your proposal for the same use cases? Please indicate how you'd specify the custom properties for each resource and how you specify the provider template in the API.
As you can see, it's not really different, just an implementation strategy where all the edge cases have already been worked out, and all the parts already exist.
cheers,
Zane.
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev