On 01/16/2018 08:19 PM, Zhenyu Zheng wrote:
Thanks for the info. So it seems we are not going to implement aggregate
overcommit ratios in placement, at least in the near future?
As @edleafe alluded to, we will not be adding functionality to the
placement service to associate an overcommit ratio …
On 01/18/2018 03:06 PM, Logan V. wrote:
We have used aggregate-based scheduler filters since deploying our
cloud in Kilo. This explains the unpredictable scheduling we have seen
since upgrading to Ocata. Before this post, was there some indication
I missed that these filters can no longer be used …
Hi,
On Tue, Jan 16, 2018 at 4:24 PM, melanie witt wrote:
> Hello Stackers,
>
> This is a heads up to any of you using the AggregateCoreFilter,
> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
> These filters have effectively allowed operators to set overcommit ratios
> pe…
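For readers new to the thread, a rough sketch of how per-aggregate overcommit has traditionally been wired up with these filters (the metadata key names match the filters' documented behaviour; the specific ratio values are illustrative only):

```ini
# nova.conf on the scheduler host: enable the aggregate-based filters
[filter_scheduler]
enabled_filters = ..., AggregateCoreFilter, AggregateRamFilter, AggregateDiskFilter

# Host aggregate metadata consumed by those filters
# (set via the aggregates API, shown here as key/value pairs):
#   cpu_allocation_ratio  = 4.0
#   ram_allocation_ratio  = 1.0
#   disk_allocation_ratio = 1.0
```

Hosts in an aggregate carrying these keys get the aggregate's ratio; hosts outside any such aggregate fall back to the ratios in their own nova.conf.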
On Thu, Jan 18, 2018 at 5:19 PM, Jay Pipes wrote:
> On 01/18/2018 03:54 PM, Mathieu Gagné wrote:
>>
>> Hi,
>>
>> On Tue, Jan 16, 2018 at 4:24 PM, melanie witt wrote:
>>>
>>> Hello Stackers,
>>>
>>> This is a heads up to any of you using the AggregateCoreFilter,
>>> AggregateRamFilter, and/or AggregateDiskFilter …
Greetings again, Mathieu, response inline...
On 01/18/2018 07:24 PM, Mathieu Gagné wrote:
So far, a couple challenges/issues:
We used to have fine-grained control over the calls a user could make to
the Nova API:
* os_compute_api:os-aggregates:add_host
* os_compute_api:os-aggregates:remove_host
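A hedged illustration of the kind of fine-grained policy Mathieu is describing: the two rule keys are the real Nova policy targets listed above, while the `capacity_admin` role name is hypothetical, standing in for whatever custom role a deployment grants to its capacity-management team.

```json
{
    "os_compute_api:os-aggregates:add_host": "role:capacity_admin",
    "os_compute_api:os-aggregates:remove_host": "role:capacity_admin"
}
```

With a policy file like this, a user holding only the `capacity_admin` role can move hosts between aggregates without being granted full admin access to the rest of the compute API.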
On 01/29/2018 12:40 PM, Chris Friesen wrote:
On 01/29/2018 07:47 AM, Jay Pipes wrote:
What I believe we can do is change the behaviour so that if a 0.0
value is found in the nova.conf file on the nova-compute worker, then
instead of defaulting to 16.0, the resource tracker would first look to …
On 01/29/2018 07:47 AM, Jay Pipes wrote:
What I believe we can do is change the behaviour so that if a 0.0 value is found
in the nova.conf file on the nova-compute worker, then instead of defaulting to
16.0, the resource tracker would first look to see if the compute node was
associated with a h…
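Jay's proposal can be sketched as a small resolution function. This is an illustrative sketch only, not actual resource-tracker code: the function and parameter names are made up, and only the precedence order (explicit nova.conf value, then aggregate metadata, then the historical default) reflects the proposal.

```python
DEFAULT_CPU_ALLOCATION_RATIO = 16.0  # Nova's historical default

def effective_cpu_ratio(conf_ratio, aggregate_ratio):
    """Resolve the cpu_allocation_ratio for a compute node.

    conf_ratio:      value from nova.conf (0.0 meaning "unset")
    aggregate_ratio: ratio from an associated host aggregate's
                     metadata, or None if no aggregate sets one
    """
    if conf_ratio != 0.0:
        # An explicit nova.conf value always wins.
        return conf_ratio
    if aggregate_ratio is not None:
        # Proposed new behaviour: fall back to the aggregate's ratio.
        return aggregate_ratio
    # Otherwise keep the historical default.
    return DEFAULT_CPU_ALLOCATION_RATIO
```

The point of the precedence order is backward compatibility: deployments that already set a non-zero ratio in nova.conf see no change, while aggregate-driven deployments can leave it at 0.0 and keep their existing metadata.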
Hi Jay,
First, thank you very much for the followup. Response inline.
On Mon, Jan 29, 2018 at 8:47 AM, Jay Pipes wrote:
> Greetings again, Mathieu, response inline...
>
> On 01/18/2018 07:24 PM, Mathieu Gagné wrote:
>>
>> So far, a couple challenges/issues:
>>
>> We used to have fine-grained control …
On Mon, Jan 29, 2018 at 8:47 AM, Jay Pipes wrote:
>
> What I believe we can do is change the behaviour so that if a 0.0 value is
> found in the nova.conf file on the nova-compute worker, then instead of
> defaulting to 16.0, the resource tracker would first look to see if the
> compute node was associated with a …
On 01/29/2018 06:48 PM, Mathieu Gagné wrote:
On Mon, Jan 29, 2018 at 8:47 AM, Jay Pipes wrote:
What I believe we can do is change the behaviour so that if a 0.0 value is
found in the nova.conf file on the nova-compute worker, then instead of
defaulting to 16.0, the resource tracker would first …
On 01/29/2018 06:30 PM, Mathieu Gagné wrote:
So let's explore what a placement-centric solution would look like
(let me know if I get anything wrong).
Here are our main concerns/challenges so far, which I will compare to
our current flow:
1. Compute nodes should not be enabled by default
When …
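On point 1, Nova does have a configuration knob that makes newly registered compute services start out disabled, which addresses the "not enabled by default" concern without any placement involvement (option name from the Nova config reference; verify availability in your release):

```ini
[DEFAULT]
# New compute services are registered in a disabled state and must
# be explicitly enabled by an operator before receiving instances.
enable_new_services = False
```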
Given the size and detail of this thread, I've tried to summarize the
problems and possible solutions/workarounds in this etherpad:
https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu
For those working on this, please check that what I have written down is
correct and …
On 2/5/2018 9:00 PM, Matt Riedemann wrote:
Given the size and detail of this thread, I've tried to summarize the
problems and possible solutions/workarounds in this etherpad:
https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu
For those working on this, please check …