Re-ran rally and saw this go by:

2017-12-07 00:02:25.399 5255 INFO rally.task.runner [-] Task a502ee62-c31b-4333-8169-f6a3d07d592e | ITER: 74 START
2017-12-07 00:02:25.915 5252 INFO rally.task.runner [-] Task a502ee62-c31b-4333-8169-f6a3d07d592e | ITER: 67 END: OK
2017-12-07 00:02:25.927 5252 INFO rally.task.runner [-] Task a502ee62-c31b-4333-8169-f6a3d07d592e | ITER: 75 START
2017-12-07 00:02:26.202 5254 INFO rally.task.runner [-] Task a502ee62-c31b-4333-8169-f6a3d07d592e | ITER: 41 END: Error ConnectFailure: Unable to establish connection to http://10.245.208.97:9696/v2.0/subnets/d6fe1572-83ca-4f64-a30e-41522471e2f9: ('Connection aborted.', BadStatusLine("''",))
2017-12-07 00:02:26.217 5254 INFO rally.task.runner [-] Task a502ee62-c31b-4333-8169-f6a3d07d592e | ITER: 76 START
2017-12-07 00:02:26.601 5255 INFO rally.task.runner [-] Task a502ee62-c31b-4333-8169-f6a3d07d592e | ITER: 73 END: OK
2017-12-07 00:02:26.626 5255 INFO rally.task.runner [-] Task a5
BadStatusLine("''",) is the smoking gun. It is almost always haproxy dropping the connection due to one of its timeouts. I highly recommend adding the following configuration to all of the OpenStack API charms:

juju config neutron-api haproxy-server-timeout=90000 haproxy-client-timeout=90000 haproxy-queue-timeout=9000 haproxy-connect-timeout=9000

The defaults are fine for non-busy clouds, but once we are stress testing we need to bump up the timeouts so that haproxy does not drop connections. This is what we have running in serverstack.

** Changed in: charm-neutron-gateway
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736171

Title:
  create_and_delete_subnets rally test failures

Status in OpenStack neutron-gateway charm:
  Invalid
Status in neutron:
  Invalid

Bug description:
  NeutronNetworks.create_and_delete_subnets is failing when run with
  concurrency greater than 1.

  Here's a snippet of a failure: http://paste.ubuntu.com/25927074/
  Here is my rally yaml: http://paste.ubuntu.com/26112719/

  This is happening with Pike on Xenial, from the Ubuntu Cloud Archive.
  The deployment is distributed across 9 nodes, with HA services.

  For now we have adjusted our test scenario to be more realistic. When
  we spread the test over 30 tenants instead of 3, and simulate 2 users
  per tenant instead of 3, we do not hit the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-gateway/+bug/1736171/+subscriptions
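The same timeout bump has to be repeated for each API charm, so it can be sketched as a small loop. This is a dry-run sketch, not the charm's own tooling: the charm names other than neutron-api are assumptions about a typical deployment, and the commands are echoed rather than executed so they can be reviewed first.

```shell
#!/bin/sh
# Dry-run sketch: print the `juju config` command for each OpenStack API
# charm. Charm names are assumptions; adjust to match your model.
# Remove the `echo` to actually apply the settings.
print_haproxy_tuning() {
    for charm in "$@"; do
        echo juju config "$charm" \
            haproxy-server-timeout=90000 \
            haproxy-client-timeout=90000 \
            haproxy-queue-timeout=9000 \
            haproxy-connect-timeout=9000
    done
}

print_haproxy_tuning neutron-api keystone nova-cloud-controller glance cinder
```

The server/client timeouts (90000 ms) govern established connections, while the queue/connect timeouts (9000 ms) bound how long a request waits for a backend; under a concurrent rally run it is the former pair that stops haproxy from aborting slow API requests mid-flight.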