I do intend to respond to all the excellent discussion on this thread,
but right now I just want to offer an update on the code:
I've split the effort apart into multiple changes starting at [1]. A few
of these are ready for review.
One opinion was that a specless blueprint would be appropriate.
On 11/5/2018 1:17 PM, Matt Riedemann wrote:
I'm thinking of a case like: resize an instance, but rather than
confirm/revert it, the user deletes the instance. That would clean up the
allocations from the target node but potentially not from the source node.
Well, this case is at least not an issue ...
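The resize-then-delete case described above is the kind of thing an operator
can check for directly against the placement API. Below is a minimal sketch
(not from the thread) of cross-checking a source node's allocations against
the instances Nova reports on that host; the credentials, host name and
provider UUID are placeholder assumptions to adapt to the deployment:

    from keystoneauth1 import adapter, loading, session

    # Assumed credentials/endpoint; replace with real values (or load them
    # from clouds.yaml via openstacksdk instead).
    auth = loading.get_plugin_loader('password').load_from_options(
        auth_url='http://keystone:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)
    placement = adapter.Adapter(sess, service_type='placement',
                                interface='public')
    compute = adapter.Adapter(sess, service_type='compute',
                              interface='public')

    rp_uuid = 'SOURCE-NODE-RP-UUID'  # placeholder: source node's provider UUID
    host = 'compute-01'              # placeholder: source node's hostname

    # Consumers holding allocations against the source node, per placement.
    allocs = placement.get(
        '/resource_providers/%s/allocations' % rp_uuid).json()['allocations']

    # Instances Nova thinks are on that host (admin-only query filters).
    servers = compute.get(
        '/servers/detail?host=%s&all_tenants=1' % host).json()['servers']
    instance_uuids = {s['id'] for s in servers}

    # Consumers with no matching instance are leak candidates. Note that a
    # consumer may legitimately be a migration record while a resize or live
    # migration is still in flight, so this is a hint, not proof.
    for consumer in allocs:
        if consumer not in instance_uuids:
            print('possible leaked allocation: %s on %s' % (consumer, rp_uuid))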
On 11/5/2018 12:28 PM, Mohammed Naser wrote:
Have you dug into any of the operations around these instances to
determine what might have gone wrong? For example, was a live migration
performed recently on these instances and, if so, did it fail? How about
evacuations (rebuild from a down host)?
T
On Mon, Nov 5, 2018 at 4:17 PM Matt Riedemann wrote:
>
> On 11/4/2018 4:22 AM, Mohammed Naser wrote:
> > Just for information's sake, a clean state cloud which had no reported issues
> > over maybe a period of 2-3 months already has 4 allocations which are
> > incorrect and 12 allocations pointing to the wrong resource provider ...
On 11/5/2018 5:52 AM, Chris Dent wrote:
* We need to have further discussion and investigation on
allocations getting out of sync. Volunteers?
This is something I've already spent a lot of time on with the
heal_allocations CLI, and have already started asking mnaser questions
about this elsewhere ...
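For context, the CLI being referred to here is the nova-manage placement
heal_allocations command added in Rocky. An illustrative invocation (the
exact options depend on the release, so check its --help output):

    nova-manage placement heal_allocations --max-count 50 --verbose

Roughly speaking, it walks instances that are missing allocations in
placement and creates them against the compute node resource provider the
instance is currently mapped to.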
On 11/4/2018 4:22 AM, Mohammed Naser wrote:
Just for information's sake, a clean state cloud which had no reported issues
over maybe a period of 2-3 months already has 4 allocations which are
incorrect and 12 allocations pointing to the wrong resource provider, so I
think this comes down to committing ...
> Thus we should only read from placement:
> * at compute node startup
> * when a write fails
> And we should only write to placement:
> * at compute node startup
> * when the virt driver tells us something has changed
I agree with this.
We could also prepare an interface for operators/other-projects ...
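For what it's worth, the read/write discipline quoted above can be sketched
in a few lines of pseudo-Python. The class and helper names here
(ProviderCache, fetch_tree, push_tree, build_desired_tree) are hypothetical,
not the actual resource tracker or report client interfaces:

    import copy

    class GenerationConflict(Exception):
        """Stand-in for a placement 409 generation-conflict error."""

    class ProviderCache(object):
        """Hypothetical sketch of the proposed read/write discipline."""

        def __init__(self, placement_client):
            self.placement = placement_client
            # Read from placement once, at compute node startup.
            self.tree = self.placement.fetch_tree()

        def periodic_update(self, virt_driver):
            # Ask the virt driver what inventory/traits *should* look like.
            desired = virt_driver.build_desired_tree(copy.deepcopy(self.tree))
            if desired == self.tree:
                # Nothing changed locally: no read, no write this period.
                return
            try:
                # Write only because the virt driver changed something; the
                # cached generations are sent along with the write.
                self.placement.push_tree(desired)
                self.tree = desired
            except GenerationConflict:
                # The write failed, so something else changed placement
                # behind our back; only now do we re-read to refresh the
                # local cache.
                self.tree = self.placement.fetch_tree()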
On Sun, 4 Nov 2018, Jay Pipes wrote:
Now that we have generation markers protecting both providers and consumers,
we can rely on those generations to signal to the scheduler report client
that it needs to pull fresh information about a provider or consumer. So,
there's really no need to automatically ...
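To make the generation-conflict signal concrete, here is a rough sketch
against the placement REST API. The payload shapes follow the API, but treat
the details as illustrative; 'placement' is assumed to be a keystoneauth1
Adapter for the placement service, as in the earlier sketch:

    def set_inventory(placement, rp_uuid, cached_generation, inventories):
        """Try to replace a provider's inventory using our cached generation."""
        body = {
            'resource_provider_generation': cached_generation,
            'inventories': inventories,  # e.g. {'VCPU': {'total': 8}}
        }
        resp = placement.put('/resource_providers/%s/inventories' % rp_uuid,
                             json=body, raise_exc=False)
        if resp.status_code == 409:
            # Conflict: our generation is stale, i.e. someone else updated
            # the provider since we last read it. That conflict is the signal
            # to re-read this one provider, rather than refreshing everything
            # on a timer.
            fresh = placement.get('/resource_providers/%s' % rp_uuid).json()
            return False, fresh['generation']
        resp.raise_for_status()
        return True, resp.json()['resource_provider_generation']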
Thanks, Eric, for the patch.
This will help keep placement calls under control.
Belmiro
On Sun, Nov 4, 2018 at 1:01 PM Jay Pipes wrote:
> On 11/02/2018 03:22 PM, Eric Fried wrote:
> > All-
> >
> > Based on a (long) discussion yesterday [1] I have put up a patch [2]
> > whereby you can set [compute]resource_provider_association_refresh to ...
On 11/02/2018 03:22 PM, Eric Fried wrote:
All-
Based on a (long) discussion yesterday [1] I have put up a patch [2]
whereby you can set [compute]resource_provider_association_refresh to
zero and the resource tracker will never* refresh the report client's
provider cache. Philosophically, we're removing ...
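For anyone following along, the knob in question lives in nova.conf on the
compute node; with the patch applied it could look something like this, with
0 meaning "never refresh on a timer", relying on startup and generation
conflicts instead:

    [compute]
    # 0 disables the periodic refresh of the report client's provider cache.
    resource_provider_association_refresh = 0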
Ugh, hit send accidentally. Please take my comments lightly, as I have not
been as involved with the development; I'm just chiming in as an operator
with some ideas.
On Fri, Nov 2, 2018 at 9:32 PM Matt Riedemann wrote:
>
> On 11/2/2018 2:22 PM, Eric Fried wrote:
> > Based on a (long) discussion yesterday [1] ...
On Fri, Nov 2, 2018 at 9:32 PM Matt Riedemann wrote:
>
> On 11/2/2018 2:22 PM, Eric Fried wrote:
> > Based on a (long) discussion yesterday [1] I have put up a patch [2]
> > whereby you can set [compute]resource_provider_association_refresh to
> > zero and the resource tracker will never* refresh
On 11/2/2018 2:22 PM, Eric Fried wrote:
Based on a (long) discussion yesterday [1] I have put up a patch [2]
whereby you can set [compute]resource_provider_association_refresh to
zero and the resource tracker will never* refresh the report client's
provider cache. Philosophically, we're removing