On 10/30/2017 05:46 PM, Matthew Treinish wrote:
> From a quick glance at the logs, my guess is that the issue is related to this stack trace in the l3-agent logs:
>
> http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/neutron/neutron-l3-agent.txt.gz?level=TRACE#_2017-10-29_23_11_15_146
>
> I'm not sure what's causing it to complain there. But I'm on a plane right now (which is why this is a top post, sorry), so I can't really dig much more than that. I'll try to take a deeper look later when I'm on solid ground. (Hopefully someone will beat me to it by then, though.)

I don't think that l3-agent trace is the cause, since the failure is coming from the API. That trace happens because of the async way the agent runs arping; the fix is https://review.openstack.org/#/c/507914/, but it only removes the log noise.

http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/neutron/neutron-server.txt.gz has some tracebacks that look config-related (possibly a missing DB table?), but I haven't looked very closely.
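
Not part of the original thread, but for anyone retracing this failure: a minimal sketch of filtering such a log for traceback lines once a copy is downloaded. The sample lines below are placeholders for illustration, not the real log content.

```python
# Placeholder lines standing in for the downloaded neutron-server log,
# so this sketch runs as-is; substitute the real file's contents.
log_lines = [
    "2017-10-29 23:11:15.146 1234 ERROR neutron.api.v2.resource placeholder",
    "2017-10-29 23:11:15.147 1234 INFO neutron.wsgi placeholder",
]

# Keep only ERROR/TRACE lines; on the real log, this is where any
# config- or DB-related tracebacks would surface.
errors = [line for line in log_lines
          if " ERROR " in line or " TRACE " in line]
print(errors)
```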

-Brian


On October 31, 2017 1:25:55 AM GMT+04:00, Mohammed Naser <mna...@vexxhost.com> wrote:

    Hi everyone,

    I'm looking for some help regarding an issue that we're having with
    the Puppet OpenStack modules, we've had very inconsistent failures in
    the Xenial with the following error:

    http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/
    http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/testr_results.html.gz

    Details: {u'message': u'Unable to associate floating IP 172.24.5.17
    to fixed IP 10.100.0.8 for instance
    d265626a-77c1-4d2f-8260-46abe548293e. Error: Request to
    https://127.0.0.1:9696/v2.0/floatingips/2e3fa334-d6ac-443c-b5ba-eeb521d6324c
    timed out', u'code': 400}
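
As an aside (not from the original mail): the Details payload above is a Python-2-style dict repr rather than JSON, so inspecting it programmatically needs `ast.literal_eval`, which accepts the `u''` literals. A sketch, with the dict literal copied verbatim from the failure above:

```python
import ast

# Error payload copied from the tempest failure above; it's a Python
# dict repr (u'' literals), not JSON, hence ast.literal_eval.
details = (
    "{u'message': u'Unable to associate floating IP 172.24.5.17 "
    "to fixed IP 10.100.0.8 for instance "
    "d265626a-77c1-4d2f-8260-46abe548293e. Error: Request to "
    "https://127.0.0.1:9696/v2.0/floatingips/"
    "2e3fa334-d6ac-443c-b5ba-eeb521d6324c timed out', u'code': 400}"
)

error = ast.literal_eval(details)

# The 400 is the API's client-error wrapper; the real problem is the
# 'timed out' request to neutron buried in the message.
print(error['code'])
print(error['message'])
```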

    At this point, we're at a bit of a loss.  I've done my best to find
    the root cause, but we haven't been able to.  The failure was
    persistent enough that we made our Xenial gates non-voting;
    however, with no fix in sight, that feels like a waste of
    resources, and we need to either fix this or drop CI for Ubuntu.
    We don't deploy on Ubuntu, and at this point most of the developers
    working on the project don't either, so we need some outside help.

    If you're a user of Puppet on Xenial, we need your help!  Without
    resources going toward fixing this, we would unfortunately have to
    drop support for Ubuntu, simply for lack of people to maintain it.
    We (the Puppet OpenStack team) would be more than happy to work
    with you to fix this, so pop into #puppet-openstack or reply to
    this email and let's get this issue fixed.

    Thanks,
    Mohammed

    ------------------------------------------------------------------------

    OpenStack Development Mailing List (not for usage questions)
    Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


