FWIW - There was a lengthy discussion in #openstack-dev yesterday regarding this [0].
[0] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-02-28.log.html#t2017-02-28T17:39:48

On Wed, Mar 1, 2017 at 5:42 AM, John Garbutt <j...@johngarbutt.com> wrote:
> On 27 February 2017 at 21:18, Matt Riedemann <mriede...@gmail.com> wrote:
> > We talked about a few things related to quotas at the PTG, some in
> > cross-project sessions earlier in the week and then some on Wednesday
> > morning in the Nova room. The full etherpad is here [1].
> >
> > Counting quotas
> > ---------------
> >
> > Melanie hit a problem with the counting quotas work in Ocata with
> > respect to how to handle quotas when the cell that an instance is
> > running in is down. The proposed solution is to track project/user ID
> > information in the "allocations" table in the Placement service so that
> > we can get allocation information for quota usage from Placement rather
> > than the cell. That should be a relatively simple change to move this
> > forward and hopefully get the counting quotas patches merged by p-1 so
> > we have plenty of burn-in time for the new quotas code.
> >
> > Centralizing limits in Keystone
> > -------------------------------
> >
> > This actually came up mostly during the hierarchical quotas discussion
> > on Tuesday, which was a cross-project session. The etherpad for that is
> > here [2]. The idea here is that Keystone already knows about the
> > project hierarchy and can be a central location for resource limits, so
> > that the various projects, like nova and cinder, don't each have to
> > have a similar data model and API for limits; we can just make that
> > common in Keystone. The other projects would still track resource usage
> > and calculate when a request is over the limit, but the hope is that
> > the calculation and enforcement can be generalized so we don't have to
> > implement the same over-quota logic in all of the projects.
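As an aside, the counting approach described above (summing allocations by project/user instead of maintaining per-cell usage tables) can be sketched roughly as follows. This is just an illustrative sketch: the record layout and function names are made up, not the actual Placement schema or Nova code, but it shows why adding project_id/user_id to allocation records is enough to answer the quota-usage question without touching the cell database.

```python
# Illustrative sketch only: counting quota usage by summing
# Placement-style allocation records tagged with project/user IDs.
# The record layout is hypothetical, not the real Placement schema.
from collections import defaultdict

def count_usage(allocations, project_id, user_id=None):
    """Sum resource allocations for a project (optionally one user).

    Each allocation record is assumed to carry project_id/user_id,
    which is the extra tracking the counting-quotas work proposes.
    """
    usage = defaultdict(int)
    for alloc in allocations:
        if alloc["project_id"] != project_id:
            continue
        if user_id is not None and alloc["user_id"] != user_id:
            continue
        for resource_class, amount in alloc["resources"].items():
            usage[resource_class] += amount
    return dict(usage)

# Sample data standing in for allocation rows.
allocations = [
    {"project_id": "p1", "user_id": "u1",
     "resources": {"VCPU": 2, "MEMORY_MB": 2048}},
    {"project_id": "p1", "user_id": "u2",
     "resources": {"VCPU": 4, "MEMORY_MB": 4096}},
    {"project_id": "p2", "user_id": "u1",
     "resources": {"VCPU": 1, "MEMORY_MB": 512}},
]

print(count_usage(allocations, "p1"))
# {'VCPU': 6, 'MEMORY_MB': 6144}
print(count_usage(allocations, "p1", user_id="u1"))
# {'VCPU': 2, 'MEMORY_MB': 2048}
```

The point is that the query only needs Placement data, so it still works when the instance's cell is unreachable.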
> >
> > There is quite a bit of detail in the nova etherpad [1] about
> > overbooking and enforcement modes, which will need to be brought up as
> > options in a spec, and then projects can sort out what makes the most
> > sense (there might be multiple enforcement models available).
> >
> > We still have to figure out the data migration plan to get limits data
> > from each project into Keystone, and what the API in Keystone is going
> > to look like, including what this looks like when you have multiple
> > compute endpoints in the service catalog, or regions, for example.
> >
> > Sean Dague was going to start working on the spec for this.
> >
> > Hierarchical quota support
> > --------------------------
> >
> > The notes on hierarchical quota support are already in [1] and [2]. We
> > agreed to not try to support hierarchical quotas in Nova until we are
> > using limits from Keystone, so that we can avoid the complexity of
> > having both systems (limits from Nova and limits from Keystone) in the
> > same API code. We also agreed to not block the counting quotas work
> > that melwitt is doing, since that's already valuable on its own. It's
> > also fair to say that hierarchical quota support in Nova is a Queens
> > item at the earliest, given we have to get limits stored in Keystone in
> > Pike first.
> >
> > Dealing with the os-quota-class-sets API
> > ----------------------------------------
> >
> > I had a spec [3] proposing to clean up some issues with the
> > os-quota-class-sets API in Nova. We agreed that rather than spend time
> > fixing the latent issues in that API, we'd just invest that time in
> > storing and getting limits from Keystone, after which we'll revisit
> > deprecating the quota classes API in Nova.
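To make the "limits in Keystone, counting and enforcement in the services" split concrete, here is a minimal sketch of what a generalized enforcement check could look like. Everything here is an assumption for illustration: the in-memory LIMITS dict stands in for a central limit store (Keystone, in the proposal), and the function and exception names are invented, not any real Nova or Keystone API.

```python
# Hypothetical sketch of generalized quota enforcement: limits come
# from a central store, the service supplies its own usage count, and
# one shared check decides pass/fail. All names are illustrative.

class OverQuota(Exception):
    pass

# Stand-in for limits fetched from a central store such as Keystone.
LIMITS = {
    ("p1", "instances"): 10,
    ("p1", "cores"): 20,
}

def check_quota(project_id, resource, requested, current_usage):
    """Raise OverQuota if usage plus the request would exceed the limit."""
    limit = LIMITS.get((project_id, resource))
    if limit is None:
        return True  # no limit registered: treat as unlimited
    if current_usage + requested > limit:
        raise OverQuota(
            f"{resource}: {current_usage}+{requested} > {limit}")
    return True

check_quota("p1", "cores", 4, 12)       # fits: 16 <= 20
try:
    check_quota("p1", "cores", 10, 12)  # would be 22 > 20
except OverQuota as exc:
    print(exc)
# cores: 12+10 > 20
```

A hierarchical variant would additionally walk the project tree Keystone knows about before deciding, which is exactly the complexity the thread agrees to defer until limits actually live in Keystone.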
> >
> > [1] https://etherpad.openstack.org/p/nova-ptg-pike-quotas
> > [2] https://etherpad.openstack.org/p/ptg-hierarchical-quotas
> > [3] https://review.openstack.org/#/c/411035/
>
> I started a quota backlog spec before the PTG to collect my thoughts here:
> https://review.openstack.org/#/c/429678
>
> I have updated that post-summit to include updated details on
> hierarchy (ln134) when using keystone to store the limits. This mostly
> came from some side discussions in the API-WG room with morgan and
> melwitt.
>
> It includes a small discussion on how the idea behind quota-class-sets
> could be turned into something usable, although that is now a problem
> for keystone's limits API.
>
> There were some side discussions around the move to placement meaning
> ironic quotas move from vCPU and RAM to custom resource classes. It's
> worth noting this largely supersedes the ideas we discussed here in
> flavor classes:
> http://specs.openstack.org/openstack/nova-specs/specs/backlog/approved/flavor-class.html
>
> I don't currently plan on taking that backlog spec further, as sdague
> is going to take moving this all forward.
>
> Thanks,
> John
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev