On 8/29/2018 9:31 AM, Naichuan Sun wrote:
Thanks, Matt. Should we create a ticket about it?
Already done:
https://bugs.launchpad.net/nova/+bug/1789654
I'm working on pushing some debug log patches now.
--
Thanks,
Matt
Thanks, Matt. Should we create a ticket about it?
BR.
Naichuan Sun
-----Original Message-----
From: Matt Riedemann [mailto:mriede...@gmail.com]
Sent: Wednesday, August 29, 2018 10:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed
On 8/29/2018 8:36 AM, Naichuan Sun wrote:
Hi, Jay,
I have added the configuration and the CI should be OK now.
Just interested in the reason :)
Thanks.
Naichuan Sun
zigo is reporting the same thing in the nova channel this morning; this
was his inventory for the compute node provider:
http://
Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently
because of placement update
I think the immediate solution would be to just set cpu_allocation_ratio to
16.0 in the nova.conf that your CI system is using.
Best,
-jay
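Concretely, Jay's suggested override would look something like this in nova.conf (a sketch; 16.0 is the long-standing nova default CPU overcommit ratio, and the option lives in the [DEFAULT] group):

```ini
[DEFAULT]
# Pin the CPU overcommit ratio explicitly so the resource tracker
# never reports allocation_ratio = 0.0 to placement.
cpu_allocation_ratio = 16.0
```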
On 08/29/2018 05:26 AM, Naichuan Sun wrote:
> Hi, Eric and
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [nova] [placement] XenServer CI failed frequently
because of placement update
Thank you very much for the help, Bob, Jay and Eric.
Naichuan Sun
-----Original Message-----
From: Bob Ball [mailto:bob.b...@citrix.com]
Subject: RE: [openstack-dev] [nova] [placement] XenServer CI failed frequently
because of placement update
> Yeah, the nova.CONF cpu_allocation_ratio is being overridden to 0.0:
The default there is 0.0[1] - and the passing tempest-full from Zuul on
https://review.openstack.org/#/c/590041/ has the same line when reading the
config[2]:
We'll have a dig to see if we can figure out why it's not default
-----Original Message-----
From: Eric Fried [mailto:openst...@fried.cc]
Sent: 28 August 2018 14:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently
because of placement update
On 8/28/2018 9:07 AM, Chris Dent wrote:
On Tue, 28 Aug 2018, Bob Ball wrote:
Just looking at Naichuan's output, I wonder if this is because allocation_ratio
is registered as 0 in the inventory.
Yes.
Whatever happened to cause that is the root, that will throw the
math off into zeroness in lots of different places. The default (if
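[A minimal sketch of the zeroness Chris describes. The formula below is the simplified effective-capacity calculation; the names are illustrative, not nova's actual code:]

```python
def capacity(total, reserved, allocation_ratio):
    """Effective capacity a provider advertises for one resource class."""
    return int((total - reserved) * allocation_ratio)

# A 16-VCPU host with a sane ratio can satisfy requests...
assert capacity(16, 0, 16.0) == 256
# ...but with allocation_ratio registered as 0, capacity collapses to
# zero, and every request ends in "No valid host was found."
assert capacity(16, 0, 0.0) == 0
```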
Naichuan-
Are you running with [1]? If you are, the placement logs (at debug
level) should be giving you some useful info. If you're not... perhaps
you could pull that in :) Note that it refactors the
_get_provider_ids_matching method completely, so it's possible your
problem will magically...
On 08/28/2018 04:17 AM, Naichuan Sun wrote:
Hi, experts,
XenServer CI has been failing frequently with the error "No valid host was
found." for more than a week. I think it is caused by the placement update.
Hi Naichuan,
Can you give us a link to the logs of a patchset where the Citrix XenServer
CI has failed?
Hi, experts,
XenServer CI has been failing frequently with the error "No valid host was
found." for more than a week. I think it is caused by the placement update.
It looks like `_get_provider_ids_matching` returns empty when allocation
candidates are requested, but the filter statements look good (vcpu/memory/disk):
coalesce(usage_
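[The fragment above looks like placement's usual capacity test. A hedged Python rendering of that SQL predicate, with names assumed from the fragment rather than taken from the actual query, would be:]

```python
def has_capacity(requested, total, reserved, allocation_ratio, usage=None):
    """Mimic the SQL predicate:
    coalesce(usage, 0) + requested <= (total - reserved) * allocation_ratio
    """
    used = usage if usage is not None else 0  # SQL coalesce(usage, 0)
    return used + requested <= (total - reserved) * allocation_ratio

# With allocation_ratio stuck at 0.0, even a completely idle host fails
# the check, which matches the "No valid host was found" symptom:
assert has_capacity(1, total=16, reserved=0, allocation_ratio=0.0) is False
assert has_capacity(1, total=16, reserved=0, allocation_ratio=16.0) is True
```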