From: Scott Devoid [mailto:dev...@anl.gov]
Sent: 04 June 2014 17:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

Not only live upgrades but also dynamic reconfiguration.

Overcommitting affects the quality of service delivered to the cloud user.  In 
this situation in particular, as in many situations in general, I think we want 
to enable the service provider to offer multiple qualities of service.  That 
is, enable the cloud provider to offer a selectable level of overcommit.  A 
given instance would be placed in a pool that is dedicated to the relevant 
level of overcommit (or, possibly, a better pool if the selected one is 
currently full).  Ideally the pool sizes would be dynamic.  That's the dynamic 
reconfiguration I mentioned preparing for.

+1 This is exactly the situation I'm in as an operator. You can do different 
levels of overcommit with host-aggregates and different flavors, but this has 
several drawbacks:

  1.  The nature of this is slightly exposed to the end user, through extra specs and the fact that two flavors cannot have the same name. One scenario we have is that we want to be able to document what each flavor name means, but also to provide different QoS standards for different projects. Since flavor names must be unique, we have to create different flavors for different levels of service. Sometimes you do want to lie to your users!
[Day, Phil] BTW you might be able to (nearly) do this already if you define 
aggregates for the two QoS pools and limit which projects can be scheduled 
into those pools using the AggregateMultiTenancyIsolation filter. I say 
nearly because, as pointed out by this spec, that filter currently only 
excludes tenants from each aggregate; it doesn't actually constrain them to 
only be in a specific aggregate.
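
[For reference, the aggregate setup Phil describes could be sketched with the standard nova CLI roughly as below. The aggregate names, host names, project ID placeholder, and ratio values are all illustrative, not taken from either deployment discussed here:]

```shell
# Create one host aggregate per QoS pool (names are illustrative)
nova aggregate-create qos-premium
nova aggregate-create qos-standard

# Pin the allocation ratio per pool via aggregate metadata
# (consumed by the AggregateCoreFilter if it is enabled)
nova aggregate-set-metadata qos-premium cpu_allocation_ratio=1.0
nova aggregate-set-metadata qos-standard cpu_allocation_ratio=4.0

# Restrict which projects may land in the premium pool
# (consumed by the AggregateMultiTenancyIsolation filter)
nova aggregate-set-metadata qos-premium filter_tenant_id=<premium-project-id>

# Assign compute hosts to their pool
nova aggregate-add-host qos-premium compute-01
nova aggregate-add-host qos-standard compute-02
```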


  2.  If I have two pools of nova-compute hypervisors with different overcommit 
settings, I have to manage the pool sizes manually. Even if I use Puppet to 
change the config and flip a host into a different pool, that requires me 
to restart nova-compute. Not an ideal situation.
  3.  If I want to do anything complicated, like three overcommit tiers with 
"good", "better", and "best" performance, and allow the scheduler to pick 
"better" for a "good" instance when the "good" pool is full, this is very 
hard to do with the current system.
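
[For context on the proposal itself: today the ratios in question are single, cloud-wide scheduler options; the spec under discussion would make them per-compute-node settings. A rough sketch, with illustrative values (16.0 and 1.5 are the nova defaults of this era):]

```ini
# Today: one ratio for the whole cloud, read by the scheduler filters
# (nova.conf on the scheduler node)
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5

# Proposed: the same options read per host by nova-compute, so e.g.
# a "best" (no-overcommit) host could set cpu_allocation_ratio = 1.0
# while a "good" (heavily overcommitted) host sets it to 8.0,
# without host aggregates or a scheduler restart.
```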

I'm looking forward to seeing this in nova-specs!
~ Scott
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
