Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Alex Schultz
On Tue, Apr 3, 2018 at 4:48 AM, Chris Dent wrote: > On Mon, 2 Apr 2018, Alex Schultz wrote: > >> So this is/was valid. A few years back there were some perf tests done >> with various combinations of processes/threads, and for Keystone it was >> determined that threads should be 1 while you should adjust the process count…
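For concreteness, the shape being recommended there (one thread, several processes) would look roughly like this in a mod_wsgi vhost; this is a hedged sketch, and the daemon-process name, script path, and process count are illustrative rather than taken from the thread:

    # Sketch: "threads=1, scale the process count" for a Keystone-style
    # WSGI service under Apache mod_wsgi. Tune processes= per host.
    WSGIDaemonProcess keystone-public processes=8 threads=1 \
        user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}

Concurrency is then capped at processes x threads (here 8 x 1), so raising the process count is the only way to add parallelism under this model.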

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Jay Pipes
On 04/03/2018 06:48 AM, Chris Dent wrote: On Mon, 2 Apr 2018, Alex Schultz wrote: So this is/was valid. A few years back there were some perf tests done with various combinations of processes/threads, and for Keystone it was determined that threads should be 1 while you should adjust the process count…

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Chris Dent
On Mon, 2 Apr 2018, Alex Schultz wrote: So this is/was valid. A few years back there were some perf tests done with various combinations of processes/threads, and for Keystone it was determined that threads should be 1 while you should adjust the process count (hence the bug). Now I guess the question…

Re: [Openstack-operators] nova-placement-api tuning

2018-04-02 Thread Alex Schultz
On Fri, Mar 30, 2018 at 11:11 AM, iain MacDonnell wrote: > > > On 03/29/2018 02:13 AM, Belmiro Moreira wrote: >> >> Some lessons so far... >> - Scale keystone accordingly when enabling placement. > > > Speaking of which; I suppose I have the same question for keystone > (currently running under httpd also)…

Re: [Openstack-operators] nova-placement-api tuning

2018-03-30 Thread iain MacDonnell
On 03/29/2018 02:13 AM, Belmiro Moreira wrote: Some lessons so far... - Scale keystone accordingly when enabling placement. Speaking of which; I suppose I have the same question for keystone (currently running under httpd also). I'm currently using threads=1, based on this (IIRC): https:/…
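One quick way to check what a running deployment actually has (as opposed to what the config intends) is to count the daemon processes; assuming display-name was set via WSGIDaemonProcess, something like the following, where the group name is hypothetical:

    # Count mod_wsgi daemon processes for the keystone group; the bracketed
    # first letter keeps grep from matching its own command line.
    ps -ef | grep -c '[k]eystone-public'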

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, iain MacDonnell wrote: If I'm reading http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html right, it seems that the MPM is not pertinent when using WSGIDaemonProcess. It doesn't impact the number of WSGI processes that will exist or how they are…
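To confirm which MPM a given httpd build is actually running (still relevant for the listener side, even though it does not control the WSGIDaemonProcess workers):

    # Prints a line such as "Server MPM: prefork" or "Server MPM: event".
    apachectl -V | grep -i 'Server MPM'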

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread iain MacDonnell
On 03/29/2018 04:24 AM, Chris Dent wrote: On Thu, 29 Mar 2018, Belmiro Moreira wrote: [lots of great advice snipped] - Change Apache MPM default from prefork to event/worker. - Increase the number of WSGI processes/threads, considering where placement is running. If I'm reading http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html…
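A hedged sketch of the MPM side of that advice; the numbers are placeholders to size per host, not values from the thread (Debian-style config path shown, adjust per distro):

    # /etc/apache2/mods-available/mpm_event.conf (illustrative values)
    <IfModule mpm_event_module>
        StartServers             4
        MinSpareThreads         25
        MaxSpareThreads         75
        ThreadsPerChild         25
        MaxRequestWorkers      400
    </IfModule>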

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Matt Riedemann
On 3/29/2018 12:05 PM, Chris Dent wrote: Other suggestions? I'm looking at things like turning off scheduler_tracks_instance_changes, since affinity scheduling is not needed (at least so far), but I'm not sure that will help with placement load (seems like it might, though?) This won't impact…
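For anyone trying that option: on Ocata-era nova it goes in nova.conf roughly as below; note the option was being renamed around that time, so check the config reference for your release:

    [DEFAULT]
    # Stops the scheduler tracking instance-state changes from computes;
    # only safe when affinity/anti-affinity scheduling isn't needed.
    # Newer releases use [filter_scheduler] track_instance_changes instead.
    scheduler_tracks_instance_changes = False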

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, iain MacDonnell wrote: …placement python stack and kicks out the 401. So this mostly indicates that socket accept is taking forever. Well, this test connects and gets a 400 immediately: echo | nc -v apihost 8778 so I don't think it's at the socket level, but, I assume,…
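Building on the nc test above, curl's timing breakdown can separate connect time from time spent inside the placement stack; apihost and port 8778 are from the thread, while the endpoint path is illustrative:

    # An unauthenticated request exercises the keystonemiddleware 401 path;
    # a large gap between connect and total points past the socket layer.
    curl -s -o /dev/null \
         -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
         http://apihost:8778/resource_providers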

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread iain MacDonnell
On 03/29/2018 01:19 AM, Chris Dent wrote: On Wed, 28 Mar 2018, iain MacDonnell wrote: Looking for recommendations on tuning of nova-placement-api. I have a few moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, and instance creation is getting very slow as they fill up…

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, Belmiro Moreira wrote: [lots of great advice snipped] - Change Apache MPM default from prefork to event/worker. - Increase the number of WSGI processes/threads, considering where placement is running. Another option is to switch to nginx and uwsgi. In situations where the…
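For the nginx + uwsgi option, a minimal sketch; the script path assumes a distro that installs the placement WSGI entry point as /usr/bin/nova-placement-api, and the socket address and counts are placeholders:

    [uwsgi]
    # Placement app served by uwsgi; nginx proxies here via uwsgi_pass.
    wsgi-file = /usr/bin/nova-placement-api
    master = true
    processes = 8
    threads = 1
    socket = 127.0.0.1:8888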

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Belmiro Moreira
Hi, with the Ocata upgrade we decided to run local placements (one service per cellV1) because we were nervous about possible scalability issues, but especially about the increase in scheduling time. Fortunately, this has now been addressed with the placement-req-filter work. We slowly started to aggregate our…

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Wed, 28 Mar 2018, iain MacDonnell wrote: Looking for recommendations on tuning of nova-placement-api. I have a few moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, and instance creation is getting very slow as they fill up. This should be well within the capabilities…

[Openstack-operators] nova-placement-api tuning

2018-03-28 Thread iain MacDonnell
Looking for recommendations on tuning of nova-placement-api. I have a few moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, and instance creation is getting very slow as they fill up. I discovered that calls to placement seem to be taking a long time, and even this…
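To quantify "taking a long time" per request, one option (assuming placement runs under Apache, as elsewhere in the thread) is to log request duration with mod_log_config's %D, which records microseconds; the format name and log path below are hypothetical:

    # Timing log for the placement vhost: client, time, request, status,
    # bytes, and duration in microseconds.
    LogFormat "%h %t \"%r\" %>s %b %D" placement_timing
    CustomLog /var/log/httpd/placement_timing.log placement_timing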