On 1/20/2016 2:15 PM, Sean Dague wrote:
On 01/20/2016 02:59 PM, Morgan Fainberg wrote:
> So this was due to a change in keystonemiddleware. We stopped doing
> in-memory caching of tokens per process, per worker by default [1].
> There are a couple of reasons:
> 1) in-memory caching produced unreliable validation because some
> processes may have a cache, some may not
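Morgan's first point, that validation becomes unreliable when each worker keeps its own cache, can be sketched with a small, hypothetical simulation (illustrative classes only, not keystonemiddleware code):

```python
# Hypothetical simulation of per-process token caching. After a token is
# revoked, a worker that cached the old "valid" answer keeps accepting it,
# while a worker with a cold cache correctly rejects it.

class FakeKeystone:
    """Stands in for the identity server's validation endpoint."""
    def __init__(self):
        self.valid_tokens = {"tok-123"}

    def validate(self, token):
        return token in self.valid_tokens


class Worker:
    """One API worker process with its own private in-memory cache."""
    def __init__(self, keystone):
        self.keystone = keystone
        self.cache = {}  # token -> cached validation result

    def is_valid(self, token):
        if token not in self.cache:
            self.cache[token] = self.keystone.validate(token)
        return self.cache[token]


keystone = FakeKeystone()
worker_a = Worker(keystone)
worker_b = Worker(keystone)

worker_a.is_valid("tok-123")              # worker A caches "valid"
keystone.valid_tokens.discard("tok-123")  # token gets revoked

print(worker_a.is_valid("tok-123"))  # True: stale cache still accepts it
print(worker_b.is_valid("tok-123"))  # False: cold-cache worker rejects it
```

A shared cache that every worker consults gives a single consistent answer, which is the behavior the per-process default could not guarantee.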
Hi,
By the way, an OSprofiler trace shows how this regression affects the number of
DB queries done by Keystone (during the boot of a VM):
http://boris-42.github.io/b2.html
Best regards,
Boris Pavlovic
On Wed, Jan 20, 2016 at 3:30 PM, Morgan Fainberg wrote:
As promised, here are the fixes:
https://review.openstack.org/#/q/Ifc17c27744dac5ad55e84752ca6f68169c2f5a86,n,z
Proposed to both master and liberty.
On Wed, Jan 20, 2016 at 12:15 PM, Sean Dague wrote:
> On 01/20/2016 02:59 PM, Morgan Fainberg wrote:
> > So this was due to a change in keystonemiddleware.
So this was due to a change in keystonemiddleware. We stopped doing
in-memory caching of tokens per process, per worker by default [1]. There
are a couple of reasons:
1) in-memory caching produced unreliable validation because some processes
may have a cache, some may not
2) in-memory caching was
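For deployments affected by the default change, the usual remedy is to give every worker a shared cache via memcached rather than relying on any per-process cache. The sketch below assumes the keystonemiddleware `[keystone_authtoken]` options `memcached_servers` and `token_cache_time`; the server address is a placeholder, so check the configuration reference for your release:

```ini
[keystone_authtoken]
# Point all API workers at the same memcached instance(s) so token
# validation results are shared and consistent across processes.
memcached_servers = 127.0.0.1:11211
# Seconds a validated token may be served from the cache; lower values
# pick up revocations sooner at the cost of more validation calls.
token_cache_time = 300
```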
It might be worth trying things out with that commit reverted.

stevemar

From: "Armando M."
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 2016/01/20 01:59 AM
Subject: Re: [openstack-dev] [keystone][neutron][requirements] - k
On 19 January 2016 at 22:46, Kevin Benton wrote:
Hi all,
We noticed a major jump in the neutron tempest and API test run times
recently in Neutron. After digging through logstash I found out that it
first occurred on the requirements bump here:
https://review.openstack.org/#/c/265697/
After locally testing each requirements change individually, I narrowed it
down to the keystonemiddleware update.