We're planning to come up with a solution for neutron profiling, and an
etherpad [1] has been created for this purpose.
If anyone wants to join efforts, feel free to contribute.
[1] https://etherpad.openstack.org/p/neutron-profiling
__
Note that, as of about a week ago [1], neutron no longer relies so heavily
on giant join conditions, due to explosions in result sizes with the
various one-to-many relationships.
We switched to subqueries for the one-to-many relationships to prevent the
result-size explosions, so now we issue a lot more queries.
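To make the result-size explosion concrete, here is a stdlib-only sketch
(not Neutron code; the table and column names are made up) showing why
joining two one-to-many relationships in one statement multiplies rows,
while issuing separate queries per relationship keeps the totals additive:

```python
# Sketch: why one giant join over several one-to-many relations explodes
# result sizes, and how separate per-relationship queries avoid it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ports (id INTEGER PRIMARY KEY);
CREATE TABLE fixed_ips (port_id INTEGER, ip TEXT);
CREATE TABLE dns_records (port_id INTEGER, name TEXT);
""")
conn.execute("INSERT INTO ports VALUES (1)")
conn.executemany("INSERT INTO fixed_ips VALUES (1, ?)",
                 [(f"10.0.0.{i}",) for i in range(10)])
conn.executemany("INSERT INTO dns_records VALUES (1, ?)",
                 [(f"host{i}",) for i in range(10)])

# One join across both relations: 10 x 10 = 100 rows for a single port.
joined = conn.execute("""
    SELECT p.id, f.ip, d.name FROM ports p
    JOIN fixed_ips f ON f.port_id = p.id
    JOIN dns_records d ON d.port_id = p.id
""").fetchall()
print(len(joined))  # 100

# One query per relationship: 10 + 10 = 20 rows total.
ips = conn.execute("SELECT port_id, ip FROM fixed_ips").fetchall()
names = conn.execute("SELECT port_id, name FROM dns_records").fetchall()
print(len(ips) + len(names))  # 20
```

More round trips, but each result set stays proportional to the actual
data, which is the trade-off described above.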
We could potentially make that call async on the agent, but the agent has
very little to do without the information in the response that comes back.
As we switch over to push notifications, this method of data retrieval will
be completely gone, so we probably don't want to spend much time redesigning it.
Mike, Team,
Rolling the dice here:
https://review.openstack.org/#/c/435009/
Thanks,
Dims
On Thu, Feb 16, 2017 at 11:35 AM, Mike Bayer wrote:
On 02/15/2017 12:46 PM, Daniel Alvarez Sanchez wrote:
Also, while having a look at server profiling, around the 33% of the
time was spent building SQL queries [1]. Mike Bayer went through this
and suggested having a look at baked queries and also submitted a sketch
of his proposal [2].
Neutro
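The idea behind baked queries is to cache the expensive query-construction
step, keyed by the query's structure rather than its parameter values, so
repeated calls only pay for binding. Here is a stdlib-only sketch of that
caching pattern (illustrative names; this is not SQLAlchemy's actual baked
API, just the concept):

```python
# Sketch: cache the built SQL string keyed by query structure, so the
# string-building cost is paid once per query shape.
_sql_cache = {}

def build_port_query(filters):        # pretend this is the expensive step
    return ("SELECT * FROM ports WHERE "
            + " AND ".join(f"{k} = ?" for k in sorted(filters)))

def baked_port_query(filters):
    key = tuple(sorted(filters))      # structure, not values
    if key not in _sql_cache:
        _sql_cache[key] = build_port_query(filters)
    return _sql_cache[key], [filters[k] for k in sorted(filters)]

sql, params = baked_port_query({"network_id": "net-1", "status": "ACTIVE"})
print(sql)     # SELECT * FROM ports WHERE network_id = ? AND status = ?
print(params)  # ['net-1', 'ACTIVE']
```

A second call with the same filter keys (any values) reuses the cached
string, which is where the 33% spent building queries would be recovered.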
Awesome work, Kevin!
For the DHCP notification, in my profiling I got only 10% of the CPU time
[0], without taking the waiting times into account, which is probably what
you also measured.
Your patch seems like a neat and great optimization :)
Also, since "get_devices_details_list_and_failed_devi
Thanks for the stats and the nice diagram. I did some profiling, and I'm
sure it's the RPC handler on the Neutron server side behaving like garbage.
There are several causes, which I have a string of patches up to address;
they mainly stem from the fact that l2pop requires multiple port status updates
On 02/15/2017 12:46 PM, Daniel Alvarez Sanchez wrote:
Hi there,
We're trying to figure out why, sometimes, rpc_loop takes over 10 seconds
to process an iteration when booting instances. So we deployed devstack on
an 8GB, 4vCPU VM and did some profiling on the following command:
nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec --nic
net-name
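For anyone reproducing this, the kind of profiling described above can be
done with the stdlib cProfile/pstats pair. A minimal sketch (the
rpc_loop_iteration function below is a stand-in, not the real agent code):

```python
# Sketch: profile one iteration of a loop and report cumulative times.
import cProfile
import io
import pstats

def fetch_device_details():
    return [{"device": f"tap{i}"} for i in range(100)]

def rpc_loop_iteration():
    devices = fetch_device_details()
    return len(devices)

profiler = cProfile.Profile()
profiler.enable()
rpc_loop_iteration()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print("function calls" in report)  # True
```

Sorting by cumulative time is usually the quickest way to see which RPC
handler or query-building step dominates an iteration.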