Jens,
That's quite an interesting catch. I'm reaching out to the author of
this change to get some more information.
Thanks,
Mohammed
On Tue, Nov 14, 2017 at 2:02 PM, Jens Harbott wrote:
2017-11-14 16:29 GMT+00:00 Mohammed Naser :
> Hi everyone,
>
> Thank you so much for the work on this, I'm sure we can progress with
> this together. I have noticed that this only occurs in master and
> never in the stable branches. Also, it only occurs under Ubuntu (so
> maybe something related to mod_wsgi version?)
Hi everyone,
Thank you so much for the work on this, I'm sure we can progress with
this together. I have noticed that this only occurs in master and
never in the stable branches. Also, it only occurs under Ubuntu (so
maybe something related to mod_wsgi version?)
Given that we don't have any
Yeah, I've been scavenging the logs for any kind of indicator of what
might have gone wrong, but I can't see anything
related to a deadlock, even though I'm very certain that's the issue; I just
don't know what's causing it.
Perhaps we will need to manually recreate this issue and then
troubleshoot it.
2017-11-14 8:24 GMT+00:00 Tobias Urdin :
> Trying to trace this, tempest calls the POST /servers/{server_id}/action
> API endpoint of the Nova compute API.
>
> https://github.com/openstack/tempest/blob/master/tempest/lib/services/compute/floating_ips_client.py#L82
>
> Nova then
Am I actually hallucinating, or is it the Nova API that cannot communicate with
Keystone?
I cannot substantiate this with any logs from Keystone.
2017-10-29 23:12:35.521 17800 ERROR nova.api.openstack.compute.floating_ips
[req-7f810cc7-a498-4bf4-b27e-8fc80d652785 42526a28b1a14c629b83908b2d75c647
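If it helps to rule that out, a quick connectivity check from the nova-api host
toward Keystone could look something like this (just a sketch; the Keystone URL
is illustrative and should be taken from the [keystone_authtoken] section of
nova.conf):

    # Minimal sketch: verify the nova-api host can reach Keystone at all.
    # The URL below is illustrative; use the auth URL configured in nova.conf.
    import requests

    KEYSTONE_URL = "http://controller:5000/v3"

    try:
        resp = requests.get(KEYSTONE_URL, timeout=10)
        print("Keystone answered with status", resp.status_code)
    except requests.exceptions.RequestException as exc:
        print("Could not reach Keystone:", exc)

If that times out from the API node while working fine from elsewhere, it would
at least point toward connectivity rather than the association code itself.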
Trying to trace this, tempest calls the POST /servers/{server_id}/action API
endpoint of the Nova compute API.
https://github.com/openstack/tempest/blob/master/tempest/lib/services/compute/floating_ips_client.py#L82
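For anyone following along, as far as I can tell that client sends the legacy
addFloatingIp server action; at the HTTP level it is roughly equivalent to the
sketch below (not the actual tempest code; the function name, token handling
and parameters are illustrative):

    # Rough equivalent of the association request at the HTTP level,
    # assuming the legacy addFloatingIp server action.
    import json
    import requests

    def associate_floating_ip(compute_url, token, server_id, address):
        """POST /servers/{server_id}/action with an addFloatingIp body."""
        resp = requests.post(
            "%s/servers/%s/action" % (compute_url, server_id),
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/json"},
            data=json.dumps({"addFloatingIp": {"address": address}}),
        )
        resp.raise_for_status()
        return resp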
Nova then takes the request and tries to do this floating IP association using
the
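If I understand the flow correctly, with Neutron in the picture the association
ultimately comes down to updating the floating IP's port_id; a rough sketch
against the Neutron v2.0 API (URL, token and IDs are all illustrative) would be:

    # Sketch of the Neutron-side update that the association amounts to
    # (Neutron v2.0 floating IP API; all values illustrative).
    import json
    import requests

    def point_floating_ip_at_port(neutron_url, token, floatingip_id, port_id):
        """PUT /v2.0/floatingips/{floatingip_id} to set the port association."""
        resp = requests.put(
            "%s/v2.0/floatingips/%s" % (neutron_url, floatingip_id),
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/json"},
            data=json.dumps({"floatingip": {"port_id": port_id}}),
        )
        resp.raise_for_status()
        return resp.json()

If that update is what stalls, the neutron-server and l3 agent logs around the
same timestamp would be the place to look.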
Hello,
Same here, I will continue looking at this as well.
Would be great if we could get some input from a neutron dev with good insight
into the project.
Can we backtrace the timed-out message from where it's thrown/returned?
Error: Request to
Hey,
Do you know if the bug appears on a specific Ubuntu / OpenStack version?
As far as I remember it was not related to the puppet branch? I mean, the
bug exists on master but also on the newton puppet branches, right?
We are using Ubuntu at my company, so we would love to see that continue ;)
Hi,
I hope that everyone had safe travels and enjoyed their time in Sydney
(and those who weren't there enjoyed a bit of quiet time!). I'm just
sending this email to see if anyone has had a chance to look more into this (or
perhaps we can get some help if there are any Canonical folks on the
list?)
I would
On Thu, Nov 2, 2017 at 1:02 PM, Tobias Urdin wrote:
I've been staring at this for almost an hour now going through all the logs and
I can't really pinpoint
where that error message is generated. I cannot find any references to the
timed-out message that the API returns, or to the "unable to associate" part.
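One way to narrow that down might be to brute-force search the installed service
code for those strings, along these lines (a rough sketch; the paths and search
strings are illustrative and depend on how the packages are installed):

    # Crude search for where an error string could be produced, assuming the
    # services run from the system site-packages (paths are illustrative).
    import os

    NEEDLES = ("timed out", "Unable to associate")
    ROOTS = ("/usr/lib/python2.7/dist-packages/nova",
             "/usr/lib/python2.7/dist-packages/neutron")

    for root in ROOTS:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith(".py"):
                    continue
                path = os.path.join(dirpath, name)
                with open(path) as handle:
                    for lineno, line in enumerate(handle, 1):
                        if any(needle in line for needle in NEEDLES):
                            print("%s:%d: %s" % (path, lineno, line.strip()))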
What I'm currently staring at is
On Mon, Oct 30, 2017 at 6:07 PM, Brian Haley wrote:
> On 10/30/2017 05:46 PM, Matthew Treinish wrote:
>>
>> From a quick glance at the logs my guess is that the issue is related to
>> this stack trace in the l3 agent logs:
Hi everyone,
I'm looking for some help regarding an issue that we're having with
the Puppet OpenStack modules; we've had very inconsistent failures on
Xenial with the following error: