[openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-22 Thread Alex Glikson
Dear all,

Following the initial discussions at the last design summit, we have 
published the design [2] and the first take on the implementation [3] of 
the blueprint adding support for multiple active scheduler 
policies/drivers [1]. 
In a nutshell, the idea is to allow overriding the 'default' scheduler 
configuration parameters (driver, filters, their configuration parameters, 
etc) for particular host aggregates. The 'policies' are introduced as 
sections in nova.conf, and each host aggregate can have a key-value 
specifying the policy (by name).
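(As a purely illustrative sketch of the idea -- the exact section and option 
names are defined in the design wiki [2], so treat everything below as an 
assumption rather than the actual syntax:)

    # nova.conf -- hypothetical policy section overriding scheduler defaults
    [DEFAULT]
    scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
    scheduler_default_filters = RetryFilter,ComputeFilter,RamFilter,CoreFilter

    # a named policy; aggregates tagged with policy=gold would be scheduled
    # with these overrides instead of the defaults above
    [gold]
    cpu_allocation_ratio = 2.0
    ram_allocation_ratio = 1.0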

Comments on design or implementation are welcome!

Thanks,
Alex


[1] https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
[2] https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies
[3] https://review.openstack.org/#/c/37407/


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-10-21 Thread Khanh-Toan Tran
I'm not sure this is a good moment for it, but I would like to re-open the 
topic a little bit.

Just a small idea: would it be OK to use a file or a database as a central 
point to store the policies and their associated aggregates? The scheduler 
reads it first, then calls the scheduler drivers listed in the policy file for 
the associated aggregates. In this case we can get the list of filters and 
targeted aggregates before actually running the filters. Thus we avoid the 
loop filter -> aggregate -> policy -> filter.
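(A purely hypothetical sketch of what such a central policy file could look 
like -- no such file format exists in Nova today, and every name below is 
invented for illustration:)

    # hypothetical central policy store -- all section, key and aggregate
    # names are invented for illustration
    [cpu_intensive]
    aggregates = agg-cpu-1,agg-cpu-2
    scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
    filters = ComputeFilter,CoreFilter

    [mem_intensive]
    aggregates = agg-mem-1
    scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
    filters = ComputeFilter,RamFilter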

Moreover, the admin does not need to populate the flavors' extra_specs or 
associate them with aggregates, which avoids the situation where two flavors 
define two different policies yet their VMs eventually end up in the same 
aggregate.

The downside of this method is that it is not API-accessible: at the current 
state we do not have a policy management system. I would like a policy 
management system with a REST API but, even without one, this approach is no 
worse than using the nova config.

Best regards,

Toan

Alex Glikson GLIKSON at il.ibm.com 
Wed Aug 21 17:25:30 UTC 2013
Just to update those who are interested in this feature but were not able 
to follow the recent commits: we made good progress converging towards a 
simplified design, based on a combination of aggregates and flavors (both of 
which are API-driven), addressing some of the concerns expressed in this 
thread (at least to a certain extent).
The current design and a possible usage scenario have been updated at 
https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies 
Comments are welcome (as well as code reviews at 
https://review.openstack.org/#/c/37407/).

Thanks, 
Alex




From:   Joe Gordon
To: OpenStack Development Mailing List
Date:   27/07/2013 01:22 AM
Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers






On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson  wrote:
Russell Bryant  wrote on 24/07/2013 07:14:27 PM:

> 
> I really like your point about not needing to set things up via a config
> file.  That's fairly limiting since you can't change it on the fly via
> the API.


True. As I pointed out in another response, the ultimate goal would be to 
have policies as 'first class citizens' in Nova, including a DB table, 
API, etc. Maybe even a separate policy service? But in the meantime, it 
seems that the approach with a config file is a reasonable compromise in 
terms of usability, consistency and simplicity. 

I do like your idea of making policies first class citizens in Nova, but I 
am not sure doing this in nova is enough.  Wouldn't we need similar things 
in Cinder and Neutron?  Unfortunately this does tie into how to do good 
scheduling across multiple services, which is another rabbit hole 
altogether.

I don't like the idea of putting more logic in the config file; as it is, 
the config files are already too complex, making running any OpenStack 
deployment require some config file templating and some metadata magic 
(like Heat).  I would prefer to keep things like this in aggregates, or 
something else with a REST API.  So why not build a tool on top of 
aggregates to push the appropriate metadata into the aggregates?  This 
will give you a central point to manage policies, which can easily be 
updated on the fly (unlike config files).  In the long run I am 
interested in seeing OpenStack itself have a strong solution for 
policies as a first class citizen, but I am not sure if your proposal is 
the best first step to do that.


 

Regards, 
Alex 

> -- 
> Russell Bryant



Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-22 Thread Russell Bryant
On 07/22/2013 05:15 PM, Alex Glikson wrote:
> Dear all,
> 
> Following the initial discussions at the last design summit, we have
> published the design [2] and the first take on the implementation [3] of
> the blueprint adding support for multiple active scheduler
> policies/drivers [1].
> In a nutshell, the idea is to allow overriding the 'default' scheduler
> configuration parameters (driver, filters, their configuration
> parameters, etc) for particular host aggregates. The 'policies' are
> introduced as sections in nova.conf, and each host aggregate can have a
> key-value specifying the policy (by name).
> 
> Comments on design or implementation are welcome!
> 
> Thanks,
> Alex
> 
> 
> [1] https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
> [2] https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies
> [3] https://review.openstack.org/#/c/37407/

Thanks for bringing this up.  I do have some comments.

The current design shows 2 different use cases for how a scheduling
policy would be chosen.

#1 - policy associated with a host aggregate

This seems very odd to me.  Scheduling policy is what chooses hosts, so
having a subset of hosts specify which policy to use seems backwards.

#2 - via a scheduler hint

It also seems odd to have the user specifying scheduling policy.  This
seems like something that should be completely hidden from the user.

How about just making the scheduling policy choice as simple as an item
in the flavor extra specs?
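(Concretely, with the python-novaclient CLI that would look something like the 
following; the 'policy' key name here is an assumption, not an established 
convention:)

    # attach a hypothetical 'policy' extra spec to an existing flavor
    nova flavor-key m1.small set policy=gold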

The design also shows some example configuration.  It shows a global set
of enabled scheduler filters, and then policy specific tweaks of filter
config (CPU allocation ratio in the example).  I would expect to be able
to set a scheduling policy specific list of scheduler filters and
weights, as well.
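(Something along these lines, purely as an illustration of a policy-specific 
filter/weight list; whether a policy section may override these particular 
options is an assumption about the design, and all names are illustrative:)

    [DEFAULT]
    scheduler_default_filters = RetryFilter,ComputeFilter,RamFilter,CoreFilter

    # hypothetical policy with its own filter list, weighers and ratio
    [cpu_bound]
    scheduler_default_filters = RetryFilter,ComputeFilter,CoreFilter
    scheduler_weight_classes = nova.scheduler.weights.all_weighers
    cpu_allocation_ratio = 1.0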

-- 
Russell Bryant



Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-22 Thread Joe Gordon
On Mon, Jul 22, 2013 at 3:04 PM, Russell Bryant  wrote:

> On 07/22/2013 05:15 PM, Alex Glikson wrote:
> > Dear all,
> >
> > Following the initial discussions at the last design summit, we have
> > published the design [2] and the first take on the implementation [3] of
> > the blueprint adding support for multiple active scheduler
> > policies/drivers [1].
> > In a nutshell, the idea is to allow overriding the 'default' scheduler
> > configuration parameters (driver, filters, their configuration
> > parameters, etc) for particular host aggregates. The 'policies' are
> > introduced as sections in nova.conf, and each host aggregate can have a
> > key-value specifying the policy (by name).
> >
> > Comments on design or implementation are welcome!
> >
> > Thanks,
> > Alex
> >
> >
> > [1] https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
> > [2] https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies
> > [3] https://review.openstack.org/#/c/37407/
>
> Thanks for bringing this up.  I do have some comments.
>
> The current design shows 2 different use cases for how a scheduling
> policy would be chosen.
>
> #1 - policy associated with a host aggregate
>
> This seems very odd to me.  Scheduling policy is what chooses hosts, so
> having a subset of hosts specify which policy to use seems backwards.
>
> #2 - via a scheduler hint
>
> It also seems odd to have the user specifying scheduling policy.  This
> seems like something that should be completely hidden from the user.
>
> How about just making the scheduling policy choice as simple as an item
> in the flavor extra specs?
>

++, IMHO we already reveal too much scheduling information to the user via
our current set of scheduler hints.


>
> The design also shows some example configuration.  It shows a global set
> of enabled scheduler filters, and then policy specific tweaks of filter
> config (CPU allocation ratio in the example).  I would expect to be able
> to set a scheduling policy specific list of scheduler filters and
> weights, as well.
>
> --
> Russell Bryant
>


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-22 Thread Alex Glikson
Russell Bryant  wrote on 23/07/2013 01:04:24 AM:
> > [1] https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
> > [2] https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies
> > [3] https://review.openstack.org/#/c/37407/
> 
> Thanks for bringing this up.  I do have some comments.

Thanks for the comments. See below.

> 
> The current design shows 2 different use cases for how a scheduling
> policy would be chosen.
> 
> #1 - policy associated with a host aggregate
> 
> This seems very odd to me.  Scheduling policy is what chooses hosts, so
> having a subset of hosts specify which policy to use seems backwards.

This is not what we had in mind. The host aggregate is selected based on the 
policy passed in the request (hint, extra spec, or whatever -- see below) 
and the 'policy' attribute of the aggregate -- possibly in conjunction with 
'regular' aggregate filtering. And not the other way around. Maybe the 
design document is not clear enough about this point.

> #2 - via a scheduler hint
> 
> It also seems odd to have the user specifying scheduling policy.  This
> seems like something that should be completely hidden from the user.
> 
> How about just making the scheduling policy choice as simple as an item
> in the flavor extra specs?

This is certainly an option. It would be just another implementation of 
the policy selection interface (implemented using filters). In fact, we 
already have it implemented -- we just thought that an explicit hint could 
be more straightforward to start with. We will include the implementation 
based on flavor extra specs in the next commit.

> The design also shows some example configuration.  It shows a global set
> of enabled scheduler filters, and then policy specific tweaks of filter
> config (CPU allocation ratio in the example).  I would expect to be able
> to set a scheduling policy specific list of scheduler filters and
> weights, as well.

This is certainly supported. We just didn't want to complicate the example 
too much. It could even be a different driver, assuming that the driver 
complies with the 'policy' attribute of the aggregates -- which is 
achieved by the PolicyFilter in FilterScheduler. We plan to make other 
drivers 'policy-aware' in a future patch, leveraging the new db method that 
returns the hosts belonging to aggregates with compatible policies.
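(For readers unfamiliar with Nova's filter plumbing, here is a rough sketch of 
what such a PolicyFilter might look like. This is a guess at the shape of the 
code rather than the patch under review in [3]; the 'policy' extra-spec key, 
the aggregate metadata key and the matching logic are all assumptions:)

    # Hypothetical PolicyFilter sketch -- not the actual code from [3].
    # Assumes the requested policy arrives via flavor extra specs and each
    # aggregate stores its policy under a 'policy' metadata key.
    from nova import db
    from nova.scheduler import filters


    class PolicyFilter(filters.BaseHostFilter):
        """Pass hosts whose aggregate 'policy' matches the requested one."""

        def host_passes(self, host_state, filter_properties):
            spec = filter_properties.get('request_spec', {})
            extra_specs = spec.get('instance_type', {}).get('extra_specs', {})
            requested = extra_specs.get('policy')
            if not requested:
                return True  # no policy requested: any host will do

            context = filter_properties['context'].elevated()
            # {key: set-of-values} for all aggregates this host belongs to
            metadata = db.aggregate_metadata_get_by_host(context,
                                                         host_state.host)
            return requested in metadata.get('policy', set())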

Hope this clarifies the concerns.

Regards,
Alex

> -- 
> Russell Bryant
> 


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Russell Bryant
On 07/23/2013 12:24 AM, Alex Glikson wrote:
> Russell Bryant  wrote on 23/07/2013 01:04:24 AM:
>> > [1] https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
>> > [2] https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies
>> > [3] https://review.openstack.org/#/c/37407/
>>
>> Thanks for bringing this up.  I do have some comments.
> 
> Thanks for the comments. See below.
> 
>>
>> The current design shows 2 different use cases for how a scheduling
>> policy would be chosen.
>>
>> #1 - policy associated with a host aggregate
>>
>> This seems very odd to me.  Scheduling policy is what chooses hosts, so
>> having a subset of hosts specify which policy to use seems backwards.
> 
> This is not what we had in mind. Host aggregate is selected based on
> policy passed in the request (hint, extra spec, or whatever -- see
> below) and 'policy' attribute of the aggregate -- possibly in
> conjunction with 'regular' aggregate filtering. And not the other way
> around. Maybe the design document is not clear enough about this point.

Then I don't understand what this adds over the existing ability to
specify an aggregate using extra_specs.

>> #2 - via a scheduler hint
>>
>> It also seems odd to have the user specifying scheduling policy.  This
>> seems like something that should be completely hidden from the user.
>>
>> How about just making the scheduling policy choice as simple as an item
>> in the flavor extra specs?
> 
> This is certainly an option. It would be just another implementation of
> the policy selection interface (implemented using filters). In fact, we
> already have it implemented -- just thought that explicit hint could be
> more straightforward to start with. Will include the implementation
> based on flavor extra spec in the next commit.

Ok.  I'd actually prefer to remove the scheduler hint support
completely.  I'm not even sure it makes sense to make this pluggable.  I
can't think of why something other than flavor extra specs is necessary
and justifies the additional complexity.

>> The design also shows some example configuration.  It shows a global set
>> of enabled scheduler filters, and then policy specific tweaks of filter
>> config (CPU allocation ratio in the example).  I would expect to be able
>> to set a scheduling policy specific list of scheduler filters and
>> weights, as well.
> 
> This is certainly supported. Just didn't want to complicate the example
> too much. It could be even a different driver, assuming that the driver
> complies with the 'policy' attribute of the aggregates -- which is
> achieved by PolicyFilter in FilterScheduler. We plan to make other
> drivers 'policy-aware' in a future patch, leveraging the new db method
> that returns hosts belonging to aggregates with compatible policies.

I think some additional examples would help.  It's also important to
have this laid out for documentation purposes.

-- 
Russell Bryant



Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Alex Glikson
Russell Bryant  wrote on 23/07/2013 05:35:18 PM:
> >> #1 - policy associated with a host aggregate
> >>
> >> This seems very odd to me.  Scheduling policy is what chooses hosts, so
> >> having a subset of hosts specify which policy to use seems backwards.
> >
> > This is not what we had in mind. Host aggregate is selected based on
> > policy passed in the request (hint, extra spec, or whatever -- see
> > below) and 'policy' attribute of the aggregate -- possibly in
> > conjunction with 'regular' aggregate filtering. And not the other way
> > around. Maybe the design document is not clear enough about this point.
>
> Then I don't understand what this adds over the existing ability to
> specify an aggregate using extra_specs.

The added value is in the ability to configure the scheduler accordingly 
-- potentially differently for different aggregates -- in addition to just 
restricting the target hosts to those belonging to an aggregate with 
certain properties. For example, let's say we want to support two classes 
of workloads: CPU-intensive and memory-intensive. The administrator may 
decide to use 2 different hardware models, and configure one aggregate 
with lots of CPU, and another aggregate with lots of memory. In addition 
to just routing an incoming provisioning request to the correct aggregate 
(which can be done already), we may want different cpu_allocation_ratio 
and ram_allocation_ratio values when managing resources in each of the 
aggregates. In order to support this, we would define 2 policies (with 
corresponding configuration of filters), and attach each one to the 
corresponding aggregate.
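(A rough CLI illustration of that setup, using the standard aggregate commands; 
the 'policy' metadata key and the policy names are assumptions taken from this 
design discussion, and the nova.conf sections they refer to are hypothetical:)

    # one aggregate per hardware model
    nova aggregate-create cpu-heavy
    nova aggregate-create mem-heavy

    # tag each aggregate with a (hypothetical) policy name; '1' and '2' are
    # the aggregate IDs returned by aggregate-create
    nova aggregate-set-metadata 1 policy=cpu_intensive
    nova aggregate-set-metadata 2 policy=mem_intensive

    # nova.conf would then carry [cpu_intensive] and [mem_intensive] sections
    # with different cpu_allocation_ratio / ram_allocation_ratio values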

>
> >> #2 - via a scheduler hint
> >> How about just making the scheduling policy choice as simple as an item
> >> in the flavor extra specs?
> >
> > This is certainly an option. It would be just another implementation of
> > the policy selection interface (implemented using filters). In fact, we
> > already have it implemented -- just thought that explicit hint could be
> > more straightforward to start with. Will include the implementation
> > based on flavor extra spec in the next commit.
>
> Ok.  I'd actually prefer to remove the scheduler hint support
> completely. 

OK, removing the support for doing it via hint is easy :-)

> I'm not even sure it makes sense to make this pluggable.  I
> can't think of why something other than flavor extra specs is necessary
> and justifies the additional complexity.

Well, I can think of a few use-cases where the selection approach might be 
different. For example, it could be based on tenant properties (derived 
from some kind of SLA associated with the tenant, determining the 
over-commit levels), or image properties (e.g., I want to determine the 
placement of Windows instances taking into account Windows licensing 
considerations), etc.

> I think some additional examples would help.  It's also important to
> have this laid out for documentation purposes.

OK, sure, will add more. Hopefully the few examples above also help to 
clarify the intention/design.

Regards,
Alex

> -- 
> Russell Bryant


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Russell Bryant
On 07/23/2013 12:02 PM, Alex Glikson wrote:
> Russell Bryant  wrote on 23/07/2013 05:35:18 PM:
>> >> #1 - policy associated with a host aggregate
>> >>
>> >> This seems very odd to me.  Scheduling policy is what chooses hosts, so
>> >> having a subset of hosts specify which policy to use seems backwards.
>> >
>> > This is not what we had in mind. Host aggregate is selected based on
>> > policy passed in the request (hint, extra spec, or whatever -- see
>> > below) and 'policy' attribute of the aggregate -- possibly in
>> > conjunction with 'regular' aggregate filtering. And not the other way
>> > around. Maybe the design document is not clear enough about this point.
>>
>> Then I don't understand what this adds over the existing ability to
>> specify an aggregate using extra_specs.
> 
> The added value is in the ability to configure the scheduler accordingly
> -- potentially differently for different aggregates -- in addition to
> just restricting the target host to those belonging to an aggregate with
> certain properties. For example, let's say we want to support two
> classes of workloads - CPU-intensive, and memory-intensive. The
> administrator may decide to use 2 different hardware models, and
> configure one aggregate with lots of CPU, and another aggregate with
> lots of memory. In addition to just routing an incoming provisioning
> request to the correct aggregate (which can be done already), we may
> want different cpu_allocation_ratio and ram_allocation_ratio values when
> managing resources in each of the aggregates. In order to support this,
> we would define 2 policies (with corresponding configuration of
> filters), and attach each one to the corresponding aggregate.

I understand the use case, but can't it just be achieved with 2 flavors
and without this new aggregate-policy mapping?

flavor 1 with extra specs to say aggregate A and policy Y
flavor 2 with extra specs to say aggregate B and policy Z
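(Concretely, something like the following; the extra-spec key names are 
hypothetical -- in practice the aggregate part would go through whatever 
convention the AggregateInstanceExtraSpecsFilter expects:)

    # flavor 1 -> aggregate A + policy Y, flavor 2 -> aggregate B + policy Z
    nova flavor-key flavor1 set class=cpu policy=Y
    nova flavor-key flavor2 set class=mem policy=Z

    # matching metadata on the aggregates (IDs 1 and 2 assumed)
    nova aggregate-set-metadata 1 class=cpu
    nova aggregate-set-metadata 2 class=mem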

>>
>> >> #2 - via a scheduler hint
>> >> How about just making the scheduling policy choice as simple as an item
>> >> in the flavor extra specs?
>> >
>> > This is certainly an option. It would be just another implementation of
>> > the policy selection interface (implemented using filters). In fact, we
>> > already have it implemented -- just thought that explicit hint could be
>> > more straightforward to start with. Will include the implementation
>> > based on flavor extra spec in the next commit.
>>
>> Ok.  I'd actually prefer to remove the scheduler hint support
>> completely.
> 
> OK, removing the support for doing it via hint is easy :-)
> 
>> I'm not even sure it makes sense to make this pluggable.  I
>> can't think of why something other than flavor extra specs is necessary
>> and justifies the additional complexity.
> 
> Well, I can think of a few use-cases where the selection approach might be
> different. For example, it could be based on tenant properties (derived
> from some kind of SLA associated with the tenant, determining the
> over-commit levels), or image properties (e.g., I want to determine
> placement of Windows instances taking into account Windows licensing
> considerations), etc

Well, you can define tenant specific flavors that could have different
policy configurations.

I think I'd rather hold off on the extra complexity until there is a
concrete implementation of something that requires and justifies it.

>> I think some additional examples would help.  It's also important to
>> have this laid out for documentation purposes.
> 
> OK, sure, will add more. Hopefully few examples above are also helpful
> to clarify the intention/design.


-- 
Russell Bryant



Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Alex Glikson
Russell Bryant  wrote on 23/07/2013 07:19:48 PM:

> I understand the use case, but can't it just be achieved with 2 flavors
> and without this new aggregate-policy mapping?
> 
> flavor 1 with extra specs to say aggregate A and policy Y
> flavor 2 with extra specs to say aggregate B and policy Z

I agree that this approach is simpler to implement. One of the differences 
is the level of enforcement that instances within an aggregate are managed 
under the same policy. For example, nothing would prevent the admin from 
defining 2 flavors with conflicting policies that can be applied to the 
same aggregate. Another aspect of the same problem is the case when the 
admin wants to apply 2 different policies in 2 aggregates with the same 
capabilities/properties. A natural way to distinguish between the two 
would be to add an artificial property that differs between the two -- but 
at that point, just specifying the policy directly would make the most sense.

> > Well, I can think of a few use-cases where the selection approach might be
> > different. For example, it could be based on tenant properties (derived
> > from some kind of SLA associated with the tenant, determining the
> > over-commit levels), or image properties (e.g., I want to determine the
> > placement of Windows instances taking into account Windows licensing
> > considerations), etc.
> 
> Well, you can define tenant specific flavors that could have different
> policy configurations.

Would it be possible to express something like 'I want CPU over-commit of 2.0 
for tenants with SLA=GOLD, and 4.0 for tenants with SLA=SILVER'?

> I think I'd rather hold off on the extra complexity until there is a
> concrete implementation of something that requires and justifies it.

The extra complexity is actually not that huge -- we reuse the existing 
mechanism of generic filters.

Regarding both suggestions -- I think the value of this blueprint will be 
somewhat limited if we keep just the simplest version. But if people think 
that it makes a lot of sense to do it in small increments -- we can 
probably split the patch into smaller pieces.

Regards,
Alex

> -- 
> Russell Bryant


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Russell Bryant
On 07/23/2013 04:24 PM, Alex Glikson wrote:
> Russell Bryant  wrote on 23/07/2013 07:19:48 PM:
> 
>> I understand the use case, but can't it just be achieved with 2 flavors
> and without this new aggregate-policy mapping?
>>
>> flavor 1 with extra specs to say aggregate A and policy Y
>> flavor 2 with extra specs to say aggregate B and policy Z
> 
> I agree that this approach is simpler to implement. One of the
> differences is the level of enforcement that instances within an
> aggregate are managed under the same policy. For example, nothing would
> prevent the admin from defining 2 flavors with conflicting policies that can
> be applied to the same aggregate. Another aspect of the same problem is
> the case when admin wants to apply 2 different policies in 2 aggregates
> with same capabilities/properties. A natural way to distinguish between
> the two would be to add an artificial property that would be different
> between the two -- but then just specifying the policy would make most
> sense.

I'm not sure I understand this.  I don't see anything here that couldn't
be accomplished with flavor extra specs.  Is that what you're saying?
Or are you saying there are cases that can not be set up using that
approach?

>> > Well, I can think of few use-cases when the selection approach might be
>> > different. For example, it could be based on tenant properties (derived
>> > from some kind of SLA associated with the tenant, determining the
>> > over-commit levels), or image properties (e.g., I want to determine
>> > placement of Windows instances taking into account Windows licensing
>> > considerations), etc
>>
>> Well, you can define tenant specific flavors that could have different
>> policy configurations.
> 
> Would it be possible to express something like 'I want CPU over-commit of
> 2.0 for tenants with SLA=GOLD, and 4.0 for tenants with SLA=SILVER'?

Sure.  Define policies for sla=gold and sla=silver, and the flavors for
each tenant would refer to those policies.
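(Sketching that out: the policy names and the way a flavor would reference a 
policy are assumptions, but the shape would be roughly:)

    # nova.conf -- hypothetical per-SLA policy sections
    [sla_gold]
    cpu_allocation_ratio = 2.0

    [sla_silver]
    cpu_allocation_ratio = 4.0

    # tenant-specific flavors then reference the policies, e.g.:
    #   nova flavor-key gold.small set policy=sla_gold
    #   nova flavor-key silver.small set policy=sla_silver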

>> I think I'd rather hold off on the extra complexity until there is a
>> concrete implementation of something that requires and justifies it.
> 
> The extra complexity is actually not that huge.. we reuse the existing
> mechanism of generic filters.

I just want to see something that actually requires it before it goes
in.  I take exposing a pluggable interface very seriously.  I don't want
to expose more random plug points than necessary.

> Regarding both suggestions -- I think the value of this blueprint will
> be somewhat limited if we keep just the simplest version. But if people
> think that it makes a lot of sense to do it in small increments -- we
> can probably split the patch into smaller pieces.

I'm certainly not trying to diminish value, but I am looking for
specific cases that can not be accomplished with a simpler solution.

-- 
Russell Bryant



Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Day, Phil
Hi Alex,

I'm inclined to agree with others that I'm not sure you need the complexity 
that this BP brings to the system.If you want to provide a user with a 
choice about how much overcommit they will be exposed to then doing that in 
flavours and the aggregate_instance_extra_spec filter seems the more natural 
way to do this, since presumably you'd want to charge differently for those and 
the flavour list is normally what is linked to the pricing model.  

I also like the approach taken by the recent changes to the ram filter where 
the scheduling characteristics are defined as properties of the aggregate 
rather than separate stanzas in the configuration file.
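(For reference, that looks roughly like the following -- aggregate ID 1 is 
assumed, and the metadata key follows the aggregate-based RAM filter 
convention; double-check the filter's documentation for the exact key name:)

    # override the RAM over-commit ratio for one aggregate only
    nova aggregate-set-metadata 1 ram_allocation_ratio=1.5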

An alternative, and the use case I'm most interested in at the moment, is where 
we want the user to be able to define the scheduling policies on a specific set 
of hosts allocated to them (in this case they pay for the host, so if they want 
to oversubscribe on memory/cpu/disk then they should be able to).  The basic 
framework for this is described in the BP 
https://blueprints.launchpad.net/nova/+spec/whole-host-allocation and the 
corresponding wiki page (https://wiki.openstack.org/wiki/WholeHostAllocation). 
I've also recently posted code for the basic framework built as a wrapper 
around aggregates (https://review.openstack.org/#/c/38156/, 
https://review.openstack.org/#/c/38158/), which you might want to take a look 
at.
 
It's not clear to me whether what you're proposing addresses an additional gap 
between this and the combination of the aggregate_extra_spec filter + revised 
filters to get their configurations from aggregates?

Cheers,
Phil

> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 23 July 2013 22:32
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler
> policies/drivers
> 
> On 07/23/2013 04:24 PM, Alex Glikson wrote:
> > Russell Bryant  wrote on 23/07/2013 07:19:48 PM:
> >
> >> I understand the use case, but can't it just be achieved with 2
> >> flavors and without this new aggregate-policy mapping?
> >>
> >> flavor 1 with extra specs to say aggregate A and policy Y flavor 2
> >> with extra specs to say aggregate B and policy Z
> >
> > I agree that this approach is simpler to implement. One of the
> > differences is the level of enforcement that instances within an
> > aggregate are managed under the same policy. For example, nothing
> > would prevent the admin to define 2 flavors with conflicting policies
> > that can be applied to the same aggregate. Another aspect of the same
> > problem is the case when admin wants to apply 2 different policies in
> > 2 aggregates with same capabilities/properties. A natural way to
> > distinguish between the two would be to add an artificial property
> > that would be different between the two -- but then just specifying
> > the policy would make most sense.
> 
> I'm not sure I understand this.  I don't see anything here that couldn't be
> accomplished with flavor extra specs.  Is that what you're saying?
> Or are you saying there are cases that can not be set up using that approach?
> 
> >> > Well, I can think of few use-cases when the selection approach
> >> > might be different. For example, it could be based on tenant
> >> > properties (derived from some kind of SLA associated with the
> >> > tenant, determining the over-commit levels), or image properties
> >> > (e.g., I want to determine placement of Windows instances taking
> >> > into account Windows licensing considerations), etc
> >>
> >> Well, you can define tenant specific flavors that could have
> >> different policy configurations.
> >
> > Would it possible to express something like 'I want CPU over-commit of
> > 2.0 for tenants with SLA=GOLD, and 4.0 for tenants with SLA=SILVER'?
> 
> Sure.  Define policies for sla=gold and sla=silver, and the flavors for each
> tenant would refer to those policies.
> 
> >> I think I'd rather hold off on the extra complexity until there is a
> >> concrete implementation of something that requires and justifies it.
> >
> > The extra complexity is actually not that huge.. we reuse the existing
> > mechanism of generic filters.
> 
> I just want to see something that actually requires it before it goes in.  I 
> take
> exposing a pluggable interface very seriously.  I don't want to expose more
> random plug points than necessary.
> 
> > Regarding both suggestions -- I think the value of this blueprint will
> > be somewhat limited if we keep just the simplest version. But if people
> > think that it makes a lot of sense to do it in small increments -- we
> > can probably split the patch into smaller pieces.

Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Russell Bryant
On 07/24/2013 05:39 AM, Day, Phil wrote:
> Hi Alex,
> 
> I'm inclined to agree with others that I'm not sure you need the complexity 
> that this BP brings to the system.If you want to provide a user with a 
> choice about how much overcommit they will be exposed to then doing that in 
> flavours and the aggregate_instance_extra_spec filter seems the more natural 
> way to do this, since presumably you'd want to charge differently for those 
> and the flavour list is normally what is linked to the pricing model.  
> 
> I also like the approach taken by the recent changes to the ram filter where 
> the scheduling characteristics are defined as properties of the aggregate 
> rather than separate stanzas in the configuration file.
> 
> An alternative, and the use case I'm most interested in at the moment, is 
> where we want the user to be able to define the scheduling policies on a 
> specific set of hosts allocated to them (in this case they pay for the host, 
> so if they want to oversubscribe on memory/cpu/disk then they should be able 
> to).  The basic framework for this is described in this BP 
> https://blueprints.launchpad.net/nova/+spec/whole-host-allocation and the 
> corresponding wiki page (https://wiki.openstack.org/wiki/WholeHostAllocation).
> I've also recently posted code for the basic framework built as a wrapper 
> around aggregates (https://review.openstack.org/#/c/38156/, 
> https://review.openstack.org/#/c/38158/ ) which you might want to take a look 
> at.
>  
> It's not clear to me whether what you're proposing addresses an additional 
> gap between this and the combination of the aggregate_extra_spec filter + 
> revised filters to get their configurations from aggregates?

I really like your point about not needing to set things up via a config
file.  That's fairly limiting since you can't change it on the fly via
the API.

-- 
Russell Bryant



Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Alex Glikson
"Day, Phil"  wrote on 24/07/2013 12:39:16 PM:
> 
> If you want to provide a user with a choice about how much overcommit
> they will be exposed to then doing that in flavours and the 
> aggregate_instance_extra_spec filter seems the more natural way to 
> do this, since presumably you'd want to charge differently for those
> and the flavour list is normally what is linked to the pricing model. 

So, there are 2 aspects here. First, whether policy should be part of the 
flavor definition or separate. I claim that in some cases it would make 
sense to specify it separately. For example, if we want to support 
multiple policies for the same virtual hardware configuration, making 
policy part of the flavor extra specs would potentially multiply the 
number of virtual hardware configurations -- which is what flavors 
essentially are -- by the number of policies (e.g., 10 hardware 
configurations and 3 policies would already require 30 flavors), 
contributing to an explosion in the number of flavors in the system. 
Moreover, although in some cases you would want the user to be aware of 
and distinguish between policies, this is not always the case. For 
example, the admin may want to apply a consolidation/packing policy in one 
aggregate, and spreading in another. Showing two different flavors does 
not seem reasonable in such cases. 

Secondly, even if the policy *is* defined in the flavor extra specs, I can 
see value in having a separate filter to handle it. I personally see the 
main use-case for the extra specs filter in supporting the matching of 
capabilities. Resource management policy is something which should be 
hidden, or at least abstracted, from the user. And enforcing it with a 
separate filter could be a 'cleaner' design, and also more convenient -- 
from both the developer and admin perspectives.

> I also like the approach taken by the recent changes to the ram 
> filter where the scheduling characteristics are defined as 
> properties of the aggregate rather than separate stanzas in the 
> configuration file.

Indeed, a subset of the scenarios we had in mind can be implemented by 
making each property of each filter/weight an explicit key-value of the 
aggregate, and making each of the filters/weights aware of those aggregate 
properties.
However, our design has several potential advantages, such as:
1) different policies can have different sets of filters/weights
2) different policies can even be enforced by different drivers
3) the configuration is more maintainable -- the admin defines policies in 
one place, and not in 10 places (if you have a large environment with 10 
aggregates). One of the side-effects is improved consistency -- if the 
admin needs to change a policy, he needs to do it in one place, and he can 
be sure that all the aggregates comply with one of the valid policies. 
4) the developer of filters/weights does not need to care whether the 
parameters are persisted in nova.conf or in aggregate properties

> An alternative, and the use case I'm most interested in at the 
> moment, is where we want the user to be able to define the 
> scheduling policies on a specific set of hosts allocated to them (in
> this case they pay for the host, so if they want to oversubscribe on
> memory/cpu/disk then they should be able to). 
[...]
> Its not clear to me if what your proposing addresses an additional 
> gap between this and the combination of the aggregate_extra_spec 
> filter + revised filters to get their configurations from aggregates) ?

IMO, this can be done with our proposed implementation. 
Going forward, I think that policies should be first-class citizens 
(rather than static sections in nova.conf, or just sets of key-value pairs 
associated with aggregates). Then we can provide APIs to manage them in a 
more flexible manner.

Regards,
Alex

> Cheers,
> Phil
> 
> > -Original Message-
> > From: Russell Bryant [mailto:rbry...@redhat.com]
> > Sent: 23 July 2013 22:32
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler
> > policies/drivers
> > 
> > On 07/23/2013 04:24 PM, Alex Glikson wrote:
> > > Russell Bryant  wrote on 23/07/2013 07:19:48 PM:
> > >
> > >> I understand the use case, but can't it just be achieved with 2
> > >> flavors and without this new aggregate-policy mapping?
> > >>
> > >> flavor 1 with extra specs to say aggregate A and policy Y flavor 2
> > >> with extra specs to say aggregate B and policy Z
> > >
> > > I agree that this approach is simpler to implement. One of the
> > > differences is the level of enforcement that instances within an
> > > aggregate are managed under the same policy. For example, nothing
> > > would prevent the admin from defining 2 flavors with conflicting
> > > policies that can be applied to the same aggregate.

Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Alex Glikson
Russell Bryant  wrote on 24/07/2013 07:14:27 PM:
> 
> I really like your point about not needing to set things up via a config
> file.  That's fairly limiting since you can't change it on the fly via
> the API.

True. As I pointed out in another response, the ultimate goal would be to 
have policies as 'first class citizens' in Nova, including a DB table, 
API, etc. Maybe even a separate policy service? But in the meantime, it 
seems that the approach with a config file is a reasonable compromise in 
terms of usability, consistency and simplicity.

Regards,
Alex

> -- 
> Russell Bryant


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-26 Thread Joe Gordon
On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson  wrote:

> Russell Bryant  wrote on 24/07/2013 07:14:27 PM:
>
> >
> > I really like your point about not needing to set things up via a config
> > file.  That's fairly limiting since you can't change it on the fly via
> > the API.
>
>
> True. As I pointed out in another response, the ultimate goal would be to
> have policies as 'first class citizens' in Nova, including a DB table, API,
> etc. Maybe even a separate policy service? But in the meantime, it seems
> that the approach with a config file is a reasonable compromise in terms of
> usability, consistency and simplicity.
>

I do like your idea of making policies first class citizens in Nova, but I
am not sure doing this in nova is enough.  Wouldn't we need similar things
in Cinder and Neutron?  Unfortunately this does tie into how to do good
scheduling across multiple services, which is another rabbit hole
altogether.

I don't like the idea of putting more logic in the config file; as it is,
the config files are already too complex, making running any OpenStack
deployment require some config file templating and some metadata magic
(like Heat).  I would prefer to keep things like this in aggregates, or
something else with a REST API.  So why not build a tool on top of
aggregates to push the appropriate metadata into the aggregates?  This will
give you a central point to manage policies, which can easily be updated on
the fly (unlike config files).  In the long run I am interested in seeing
OpenStack itself have a strong solution for policies as a first class
citizen, but I am not sure if your proposal is the best first step to do
that.




>
> Regards,
> Alex
>
> > --
> > Russell Bryant
>


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-28 Thread Day, Phil


> From: Joe Gordon [mailto:joe.gord...@gmail.com] 
> Sent: 26 July 2013 23:16
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler 
> policies/drivers
>
>
>>
>> On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson  wrote:
>>> Russell Bryant  wrote on 24/07/2013 07:14:27 PM:
>>
>> 
>>> I really like your point about not needing to set things up via a config
>>> file.  That's fairly limiting since you can't change it on the fly via
>>> the API.

>> True. As I pointed out in another response, the ultimate goal would be to 
>> have policies as 'first class citizens' in Nova, including a DB table, API, 
>> etc. Maybe even a separate policy service? But in the meantime, it seems 
>> that the approach with a config file is a reasonable compromise in terms of 
>> usability, consistency and simplicity. 

I think we need to be looking to the future at being able to delegate large 
parts of the functionality that is currently "admin only" in Nova, and a large 
part of that is moving things like this from the config file into APIs.  Once 
we have the Domain capability in Keystone fully available to services like 
Nova we need to think more about ownership of resources like hosts, and being 
able to delegate this kind of capability.


> I do like your idea of making policies first class citizens in Nova, but I am 
> not sure doing this in nova is enough.  Wouldn't we need similar things in 
> Cinder and Neutron?  Unfortunately this does tie into how to do good 
> scheduling across multiple services, which is another rabbit hole altogether.
>
> I don't like the idea of putting more logic in the config file; as it is, the 
> config files are already too complex, making running any OpenStack 
> deployment require some config file templating and some metadata magic 
> (like Heat).  I would prefer to keep things like this in aggregates, or 
> something else with a REST API.  So why not build a tool on top of 
> aggregates to push the appropriate metadata into the aggregates?  This will 
> give you a central point to manage policies, which can easily be updated on 
> the fly (unlike config files).  

I agree with Joe on this point, and this is the approach we're taking with the 
Pcloud / whole-host-allocation blueprint:

https://review.openstack.org/#/c/38156/
https://wiki.openstack.org/wiki/WholeHostAllocation

I don't think realistically we'll be able to land this in Havana now (as much 
as anything, I don't think it has had enough air time yet to be sure we have a 
consensus on all of the details), but Rackspace are now helping with part of 
this and we do expect to have something in a PoC / demonstrable state for the 
Design Summit to provide a more focused discussion.  Because the code is 
layered on top of existing aggregate and scheduler features it's pretty easy 
to keep it as something we can just keep rebasing.

Regards,
Phil


 



Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-29 Thread Alex Glikson
It is certainly an interesting idea to have a policy service managed via 
APIs, and to have the scheduler as a potential consumer of such a service. 
However, I suspect that this requires more discussion, and it certainly 
can't be added for Havana (you can count on me to suggest it as a topic for 
the upcoming design summit).

Moreover, I think the currently proposed implementation (incorporating 
some of the initial feedback provided in this thread) introduces 80% of 
the value, with 20% of the effort and complexity.

If anyone has specific suggestions on how to make it better without adding 
another 1000 lines of code -- I would be more than glad to adjust.

IMO, it is better to start simple in Havana, start getting feedback from 
the field regarding specific usability/feature requirements earlier rather 
than later, and incrementally improve going forward. The current design 
provides clear added value, while not introducing anything that would be 
conceptually difficult to change in the future (e.g., no new APIs, no 
schema changes, fully backwards compatible).

By the way, the inspiration for the current design was the multi-backend 
support in Cinder, where a similar approach is used to define multiple 
Cinder backends in cinder.conf, and simple logic is used to select the 
appropriate one at runtime based on the name of the corresponding section.
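(For comparison, Cinder's multi-backend configuration looks roughly like this; 
the backend names are arbitrary, and the exact option set may differ between 
releases:)

    # cinder.conf -- multi-backend example in the Grizzly/Havana style
    [DEFAULT]
    enabled_backends = lvm-1,lvm-2

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI_1

    [lvm-2]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI_2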

Regards,
Alex

P.S. The code is ready for review. Jenkins is still failing, but this 
seems to be due to a bug which has been reported and fixed, and the fix 
will be merged soon.


"Day, Phil"  wrote on 28/07/2013 01:29:22 PM:

> From: "Day, Phil" 
> To: OpenStack Development Mailing List
> Date: 28/07/2013 01:36 PM
> Subject: Re: [openstack-dev] [Nova] support for multiple active 
> scheduler policies/drivers
> 
> 
> 
> > From: Joe Gordon [mailto:joe.gord...@gmail.com] 
> > Sent: 26 July 2013 23:16
> > To: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] [Nova] support for multiple active 
> scheduler policies/drivers
> >
> >
> >>
> >> On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson  wrote:
> >>> Russell Bryant  wrote on 24/07/2013 07:14:27 PM:
> >>
> >>> I really like your point about not needing to set things up via a config
> >>> file.  That's fairly limiting since you can't change it on the fly via
> >>> the API.
> 
> >> True. As I pointed out in another response, the ultimate goal would be
> >> to have policies as 'first class citizens' in Nova, including a DB
> >> table, API, etc. Maybe even a separate policy service? But in the
> >> meantime, it seems that the approach with a config file is a reasonable
> >> compromise in terms of usability, consistency and simplicity.
> 
> I think we need to be looking to the future at being able to delegate
> large parts of the functionality that is currently "admin only" in Nova,
> and a large part of that is moving things like this from the config file
> into APIs.  Once we have the Domain capability in Keystone fully
> available to services like Nova we need to think more about ownership of
> resources like hosts, and being able to delegate this kind of capability.
> 
> > I do like your idea of making policies first class citizens in Nova,
> > but I am not sure doing this in nova is enough.  Wouldn't we need
> > similar things in Cinder and Neutron?  Unfortunately this does tie
> > into how to do good scheduling across multiple services, which is
> > another rabbit hole altogether.
> >
> > I don't like the idea of putting more logic in the config file; as it
> > is, the config files are already too complex, making running any
> > OpenStack deployment require some config file templating and some
> > metadata magic (like Heat).  I would prefer to keep things like this
> > in aggregates, or something else with a REST API.  So why not build a
> > tool on top of aggregates to push the appropriate metadata into the
> > aggregates?  This will give you a central point to manage policies,
> > which can easily be updated on the fly (unlike config files).
> 
> I agree with Joe on this point, and this is the approach we're taking
> with the Pcloud / whole-host-allocation blueprint:
> 
> https://review.openstack.org/#/c/38156/
> https://wiki.openstack.org/wiki/WholeHostAllocation
> 
> I don't think realistically we'll be able to land this in Havana now
> (as much as anything, I don't think it has had enough air time yet to be
> sure we have a consensus on all of the details), but Rackspace are
> now helping with part of this and we do expect to have something in
> a PoC / demonstrable state for the Design Summit to provide a more
> focused discussion.

Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-08-21 Thread Alex Glikson
Just to update those who are interested in this feature but were not able 
to follow the recent commits: we made good progress converging towards a 
simplified design, based on a combination of aggregates and flavors (both of 
which are API-driven), addressing some of the concerns expressed in this 
thread (at least to a certain extent).
The current design and a possible usage scenario have been updated at 
https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies 
Comments are welcome (as well as code reviews at 
https://review.openstack.org/#/c/37407/).

Thanks, 
Alex




From:   Joe Gordon
To: OpenStack Development Mailing List
Date:   27/07/2013 01:22 AM
Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers






On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson  wrote:
Russell Bryant  wrote on 24/07/2013 07:14:27 PM:

> 
> I really like your point about not needing to set things up via a config
> file.  That's fairly limiting since you can't change it on the fly via
> the API.


True. As I pointed out in another response, the ultimate goal would be to 
have policies as 'first class citizens' in Nova, including a DB table, 
API, etc. Maybe even a separate policy service? But in the meantime, it 
seems that the approach with a config file is a reasonable compromise in 
terms of usability, consistency and simplicity. 

I do like your idea of making policies first class citizens in Nova, but I 
am not sure doing this in nova is enough.  Wouldn't we need similar things 
in Cinder and Neutron?  Unfortunately this does tie into how to do good 
scheduling across multiple services, which is another rabbit hole 
altogether.

I don't like the idea of putting more logic in the config file; as it is, 
the config files are already too complex, making running any OpenStack 
deployment require some config file templating and some metadata magic 
(like Heat).  I would prefer to keep things like this in aggregates, or 
something else with a REST API.  So why not build a tool on top of 
aggregates to push the appropriate metadata into the aggregates?  This 
will give you a central point to manage policies, which can easily be 
updated on the fly (unlike config files).  In the long run I am 
interested in seeing OpenStack itself have a strong solution for 
policies as a first class citizen, but I am not sure if your proposal is 
the best first step to do that.


 

Regards, 
Alex 

> -- 
> Russell Bryant
