Adam Young shared a patch to convert the tree back to a linear list:
https://review.openstack.org/#/c/205266/
This shouldn't be merged without benchmarking, as it's purely a
performance-oriented change.
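The actual change lives in the review above; purely as illustration, here is a minimal sketch of what linear-list matching of revocation events can look like. The event fields (`user_id`, `project_id`) and the wildcard-as-None convention are assumptions for the example, not keystone's actual schema.

```python
# Hypothetical sketch of linear-list revocation matching.
# Field names and the None-means-wildcard convention are illustrative,
# not keystone's actual data model.

def is_revoked(token, events):
    """Return True if any revocation event matches the token.

    An event matches when every attribute it specifies (non-None)
    equals the corresponding token attribute.
    """
    for event in events:
        if all(token.get(attr) == value
               for attr, value in event.items()
               if value is not None):
            return True
    return False

events = [
    {"user_id": "alice", "project_id": None},  # revoke all of alice's tokens
    {"user_id": None, "project_id": "demo"},   # revoke all tokens on "demo"
]

print(is_revoked({"user_id": "alice", "project_id": "prod"}, events))  # True
print(is_revoked({"user_id": "bob", "project_id": "prod"}, events))    # False
```

A flat list like this is trivial to scan and cheap to rebuild, which is presumably why it is worth benchmarking against the tree: the win or loss depends on how many events exist and how often the structure is rebuilt versus queried.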
On Thu, Jul 23, 2015 at 11:40 AM, Matt Fischer m...@mattfischer.com wrote:
Morgan asked me to post some of my numbers here.
Thanks Dolph,
I won't be back at work until mid August, but I will try it then from a
benchmark POV.
On Mon, Jul 27, 2015 at 10:12 AM, Dolph Mathews dolph.math...@gmail.com
wrote:
Adam Young shared a patch to convert the tree back to a linear list:
https://review.openstack.org/#/c/205266/
On Wed, Jul 22, 2015 at 10:06 PM, Adam Young ayo...@redhat.com wrote:
On 07/22/2015 05:39 PM, Adam Young wrote:
On 07/22/2015 03:41 PM, Morgan Fainberg wrote:
This is an indicator that the bottleneck is not the db strictly speaking,
but also related to the way we match. This means we need to spend some
serious cycles on improving both the stored record(s) for revocation
events and the matching algorithm.
On 07/23/2015 10:45 AM, Lance Bragstad wrote:
Morgan asked me to post some of my numbers here. From my staging
environment:
With 0 revocations:
Requests per second: 104.67 [#/sec] (mean)
Time per request: 191.071 [ms] (mean)
With 500 revocations:
Requests per second: 52.48 [#/sec] (mean)
Time per request: 381.103 [ms] (mean)
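Taken together, those ab-style numbers say that 500 revocation events roughly halve throughput and double validation latency. A quick check of the arithmetic:

```python
# Slowdown implied by the staging numbers above.
rps_0, rps_500 = 104.67, 52.48    # requests per second (mean)
ms_0, ms_500 = 191.071, 381.103   # time per request, ms (mean)

print(f"throughput drop: {rps_0 / rps_500:.2f}x")  # ~1.99x
print(f"latency growth:  {ms_500 / ms_0:.2f}x")    # ~1.99x
```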
Dolph,
Per our IRC discussion, I was unable to see any performance improvement
here, although not calling DELETE so often will reduce the number of
deadlocks when we're under heavy load, especially given the globally
replicated DB we use.
On Tue, Jul 21, 2015 at 5:26 PM, Dolph Mathews wrote:
This is an indicator that the bottleneck is not the db strictly speaking, but
also related to the way we match. This means we need to spend some serious
cycles on improving both the stored record(s) for revocation events and the
matching algorithm.
Sent via mobile
Dolph,
Excuse the delayed reply; I was waiting for a brilliant solution from
someone. Without one, I'd personally prefer the cronjob, as it seems to be
the type of thing cron was designed for. That will be a painful change, as
people now rely on this behavior, so I don't know if it's feasible. I will
Well, you might be in luck! Morgan Fainberg actually implemented an
improvement that was apparently documented by Adam Young way back in March:
https://bugs.launchpad.net/keystone/+bug/1287757
There's a link to the stable/kilo backport in comment #2 - I'd be eager to
hear how it performs for you.
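The cronjob idea above boils down to purging expired revocation events in one periodic batch, instead of issuing DELETEs inline on validation requests. A minimal sketch of that cleanup job, using an in-memory SQLite table; the table and column names are illustrative assumptions, not keystone's actual schema:

```python
# Sketch of a cron-style batch cleanup of expired revocation events.
# Table/column names are hypothetical, not keystone's real schema.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE revocation_event (id INTEGER PRIMARY KEY, expires_at REAL)"
)
now = time.time()
conn.executemany(
    "INSERT INTO revocation_event (expires_at) VALUES (?)",
    [(now - 3600,), (now - 60,), (now + 3600,)],  # two expired, one live
)

# The statement a cron job would run every few minutes: one batched
# DELETE instead of many small ones on the request path.
cur = conn.execute("DELETE FROM revocation_event WHERE expires_at < ?", (now,))
conn.commit()
print(cur.rowcount)  # expired events purged
```

Batching the DELETE off the request path is also what should help with the deadlocks Matt mentions: one writer contending briefly is easier on a replicated database than many concurrent deleters.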
On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer m...@mattfischer.com wrote:
I'm having some issues with keystone revocation events. The bottom line is
that due to the way keystone handles the clean-up of these events[1],
having more than a few leads to:
- bad performance, up to 2x slower token validation with about 600 events
based on my perf measurements.
- database