[Yahoo-eng-team] [Bug 2046892] Re: [OVN] Retrieve the OVN agent extensions correctly

2023-12-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/903929
Committed: 
https://opendev.org/openstack/neutron/commit/fa46584af99c677489ee8e92eaabb25b801c1ce7
Submitter: "Zuul (22348)"
Branch: master

commit fa46584af99c677489ee8e92eaabb25b801c1ce7
Author: Rodolfo Alonso Hernandez 
Date:   Tue Dec 19 08:55:10 2023 +

[OVN] Retrieve the OVN agent extensions correctly

Now the OVN agent implements a method ``__getitem__`` that retrieves,
from ``self.ext_manager``, a loaded extension by its name. The method
returns the instantiated extension object.

Closes-Bug: #2046892
Change-Id: Ibb6dc7c9150bf99639d5b6180356963998dc4e49


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2046892

Title:
  [OVN] Retrieve the OVN agent extensions correctly

Status in neutron:
  Fix Released

Bug description:
  The OVN agent extensions are stored in
  ``OVNNeutronAgent.ext_manager``, which is an instance of
  ``OVNAgentExtensionManager`` (which inherits from
  ``NamedExtensionManager``). To retrieve a loaded extension object, it
  must be looked up by its name in the extension manager.

  Right now, the QoS HWOL extension is using an OVN agent member
  (``agent.qos_hwol``) that does not exist.
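
  A minimal sketch, assuming stevedore and a hypothetical entry-point
  namespace and class names, of how an agent can expose loaded extensions
  by name through ``__getitem__`` instead of relying on a non-existent
  attribute such as ``agent.qos_hwol``:

  from stevedore import named

  class OVNAgentExtensionManagerSketch(named.NamedExtensionManager):
      """Loads the configured OVN agent extensions by name."""

      def __init__(self, names):
          # 'neutron.agent.ovn.extensions' is an assumed namespace used
          # here for illustration only.
          super().__init__('neutron.agent.ovn.extensions', names,
                           invoke_on_load=True)

  class OVNAgentSketch:
      def __init__(self, extension_names):
          self.ext_manager = OVNAgentExtensionManagerSketch(extension_names)

      def __getitem__(self, name):
          # Return the instantiated extension object registered under `name`.
          return self.ext_manager[name].obj

  # Usage: extension = agent['qos_hwol'], rather than agent.qos_hwol.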

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2046892/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2047182] [NEW] BFV VM may be unexpectedly moved to different AZ

2023-12-21 Thread Damian Dąbrowski
Public bug reported:

In cases where:
- each availability zone has a separate storage cluster (the
[cinder]/cross_az_attach option helps to achieve that), and
- there is no default_schedule_zone,
a VM may be unexpectedly moved to a different AZ.

When a VM is created from a pre-existing volume, nova places the specific
availability zone in request_specs, which prevents the VM from being moved
to a different AZ during resize/migrate[1]. In this case, everything works
fine.

Unfortunately, problems start in the following cases:
a) the VM is created with the --boot-from-volume argument, which dynamically
creates the volume for the VM
b) the VM has only an ephemeral volume

Let's focus on case a), because case b) may not work "by design".

The _get_volume_from_bdms() method considers only pre-existing volumes[2]. A
volume that will only be created later by `--boot-from-volume` does not exist
yet, so its availability zone cannot be fetched.
As a result, request_specs contains '"availability_zone": null' and the VM can
be moved to a different AZ during resize/migrate. Because storage is not
shared between AZs, this breaks the VM.

It's not easy to fix because:
- the nova API is not aware of the designated AZ at the time it places the
request_specs in the DB
- looking at the schedule_and_build_instances method[3], we do not create the
cinder volumes before downcalling to the compute agent, and we do not allow
upcalls from the compute agent to the API DB in general, so it's hard to
update request_specs after the volume is created.

Unfortunately, at this point I don't see any easy way to fix this issue.

[1] 
https://github.com/openstack/nova/blob/d28a55959e50b472e181809b919e11a896f989e3/nova/compute/api.py#L1268C19
[2] 
https://github.com/openstack/nova/blob/d28a55959e50b472e181809b919e11a896f989e3/nova/compute/api.py#L1247
[3] 
https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L1646
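
A hedged, simplified illustration of the behaviour described above (not
nova's actual implementation; the helper name and parameters are made up):
only block device mappings that already reference an existing volume can
contribute an availability zone, while a volume nova will create later
yields nothing.

def get_volume_az_from_bdms(context, bdms, volume_api):
    """Return the AZ of the first pre-existing boot volume, if any."""
    for bdm in bdms:
        volume_id = bdm.get('volume_id')
        if volume_id:
            # Pre-existing volume: its AZ is known and can be pinned in
            # the RequestSpec.
            volume = volume_api.get(context, volume_id)
            return volume['availability_zone']
        # Boot-from-volume (source 'image'/'blank', destination 'volume'):
        # the volume does not exist yet, so no AZ can be derived here.
    return None  # => request_specs stores "availability_zone": null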

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2047182

Title:
  BFV VM may be unexpectedly moved to different AZ

Status in OpenStack Compute (nova):
  New

Bug description:
  In cases where:
  - each availability zone has a separate storage cluster (the
  [cinder]/cross_az_attach option helps to achieve that), and
  - there is no default_schedule_zone,
  a VM may be unexpectedly moved to a different AZ.

  When a VM is created from a pre-existing volume, nova places the
  specific availability zone in request_specs, which prevents the VM from
  being moved to a different AZ during resize/migrate[1]. In this case,
  everything works fine.

  Unfortunately, problems start in the following cases:
  a) the VM is created with the --boot-from-volume argument, which dynamically
  creates the volume for the VM
  b) the VM has only an ephemeral volume

  Let's focus on case a), because case b) may not work "by design".

  The _get_volume_from_bdms() method considers only pre-existing volumes[2].
  A volume that will only be created later by `--boot-from-volume` does not
  exist yet, so its availability zone cannot be fetched.
  As a result, request_specs contains '"availability_zone": null' and the VM
  can be moved to a different AZ during resize/migrate. Because storage is
  not shared between AZs, this breaks the VM.

  It's not easy to fix because:
  - the nova API is not aware of the designated AZ at the time it places the
  request_specs in the DB
  - looking at the schedule_and_build_instances method[3], we do not create
  the cinder volumes before downcalling to the compute agent, and we do not
  allow upcalls from the compute agent to the API DB in general, so it's hard
  to update request_specs after the volume is created.

  Unfortunately, at this point I don't see any easy way to fix this
  issue.

  [1] 
https://github.com/openstack/nova/blob/d28a55959e50b472e181809b919e11a896f989e3/nova/compute/api.py#L1268C19
  [2] 
https://github.com/openstack/nova/blob/d28a55959e50b472e181809b919e11a896f989e3/nova/compute/api.py#L1247
  [3] 
https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L1646

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2047182/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2023-12-21 Thread Takashi Kajinami
** Changed in: senlin
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  New
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  Fix Released
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in masakari:
  Fix Released
Status in Mistral:
  Invalid
Status in Monasca:
  New
Status in networking-arista:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  Fix Committed
Status in networking-ofagent:
  Fix Committed
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  Fix Released
Status in Rally:
  Fix Released
Status in OpenStack Searchlight:
  Fix Released
Status in senlin:
  Fix Released
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1], but it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: if we are using the logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
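
  A minimal illustration of the change this bug asks for: the deprecated
  logger.warn alias is replaced with logger.warning (the module and message
  below are made up for the example).

  import logging

  LOG = logging.getLogger(__name__)

  def report_low_disk(free_gb):
      # Deprecated spelling (Python 3 flags .warn as deprecated):
      # LOG.warn("Only %d GB free", free_gb)
      # Preferred spelling:
      LOG.warning("Only %d GB free", free_gb)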

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2023-12-21 Thread Takashi Kajinami
This was fixed in python-watcherclient by
https://review.opendev.org/c/openstack/python-watcherclient/+/280026 .

** Changed in: python-watcherclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  New
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  Fix Released
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in masakari:
  Fix Released
Status in Mistral:
  Invalid
Status in Monasca:
  New
Status in networking-arista:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  Fix Committed
Status in networking-ofagent:
  Fix Committed
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  Fix Released
Status in Rally:
  Fix Released
Status in OpenStack Searchlight:
  Fix Released
Status in senlin:
  Fix Committed
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1], but it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: if we are using the logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2047135] [NEW] Race condition at container create form

2023-12-21 Thread Vadym Markov
Public bug reported:

The issue manifests when a user pastes a container name and immediately
clicks the confirm button on the container create form. When the issue
occurs, the created container is named "undefined" instead of the name
provided in the form. The time window for this behavior is very narrow, so
it mostly affects tests; a user can hit the issue over a very slow
connection to Horizon.

The most probable cause is the form's $asyncValidators feature. It issues
requests to the Swift API to check whether such a container already exists,
and it is triggered by any input to the name field. A form submitted before
all validation requests are resolved is invalid. The $pending AngularJS
feature should handle this, but it seems to be unsupported in schema-form.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2047135

Title:
  Race condition at container create form

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The issue manifests when a user pastes a container name and immediately
  clicks the confirm button on the container create form. When the issue
  occurs, the created container is named "undefined" instead of the name
  provided in the form. The time window for this behavior is very narrow,
  so it mostly affects tests; a user can hit the issue over a very slow
  connection to Horizon.

  The most probable cause is the form's $asyncValidators feature. It issues
  requests to the Swift API to check whether such a container already
  exists, and it is triggered by any input to the name field. A form
  submitted before all validation requests are resolved is invalid. The
  $pending AngularJS feature should handle this, but it seems to be
  unsupported in schema-form.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2047135/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2047132] [NEW] floating ip on inactive port not shown in Horizon UI floating ip details

2023-12-21 Thread Vincent Gerris
Public bug reported:

When setting up a port that is not bound and assigning a Floating IP
(FIP) to it, the FIP gets associated, but the Horizon UI does not show
the IP of the port; instead it shows a "-".

The terraform/tofu snippet for the setup:

resource "openstack_networking_floatingip_associate_v2" "fip_1" {
  floating_ip = 
data.openstack_networking_floatingip_v2.fip_1.address
  port_id = openstack_networking_port_v2.port_vip.id
}
resource "openstack_networking_port_v2" "port_vip" {
  name   = "port_vip"
  network_id = 
data.openstack_networking_network_v2.network_1.id
  fixed_ip {
subnet_id  = data.openstack_networking_subnet_v2.subnet_1.id
ip_address = "192.168.56.30"
  }
}

Example from UI :

185.102.215.242 floatit 
stack1-config-barssl-3-hostany-bootstrap-1896c992-3e17-4fab-b084-bb642c517cbe 
192.168.56.20 europe-se-1-1a-net0 Active  
193.93.250.171  -   europe-se-1-1a-net0 Active  

The top one is a port that is assigned to a host and looks as expected;
the second is not, and corresponds to the Terraform snippet (it is being
used as an internal floating IP for load balancing).

The expected behavior is to see the IP 192.168.56.30 that is set at creation.
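
A hedged check with openstacksdk (the cloud name is an assumption; the
addresses come from the example above) suggesting that the association data
Horizon could display does exist on the Neutron side even though the port
is unbound:

import openstack

conn = openstack.connect(cloud='mycloud')      # assumed clouds.yaml entry
fip = conn.network.find_ip('193.93.250.171')   # the "-" row from the UI
print(fip.fixed_ip_address)                    # expected: 192.168.56.30
print(fip.port_id)                             # the unbound port_vip port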

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2047132

Title:
  floating ip on inactive port not shown in Horizon UI floating ip
  details

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When setting up a port that is not bound and assinging a Floating IP
  (FIP) to it, the FIP gets associated but the Horizon UI does not show
  the IP of the port, instead it shows a - .

  The terraform/tofu snippet for the setup:

  resource "openstack_networking_floatingip_associate_v2" "fip_1" {
floating_ip = 
data.openstack_networking_floatingip_v2.fip_1.address
port_id = openstack_networking_port_v2.port_vip.id
  }
  resource "openstack_networking_port_v2" "port_vip" {
name   = "port_vip"
network_id = 
data.openstack_networking_network_v2.network_1.id
fixed_ip {
  subnet_id  = 
data.openstack_networking_subnet_v2.subnet_1.id
  ip_address = "192.168.56.30"
}
  }

  Example from UI :

185.102.215.242 floatit 
stack1-config-barssl-3-hostany-bootstrap-1896c992-3e17-4fab-b084-bb642c517cbe 
192.168.56.20 europe-se-1-1a-net0 Active  
193.93.250.171  -   europe-se-1-1a-net0 Active  

  The top one is a port that is assigned to a host and looks as expected;
  the second is not, and corresponds to the Terraform snippet (it is being
  used as an internal floating IP for load balancing).

  The expected behavior is to see the IP 192.168.56.30 that is set at
  creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2047132/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2047101] [NEW] The test case for 'rbac_policy_quota' execution reports an error.

2023-12-21 Thread liwenjian
Public bug reported:

When we execute the test case 'test_rbac_policy_quota' in
neutron_tempest_plugin.api.admin.test_shared_network_extension.RBACSharedNetworksTest,
the program throws an exception.

* Step-by-step reproduction:
1. Set up neutron-tempest-plugin.
2. Run the test:

(rally)[root@ci /]$ python -m testtools.run
neutron_tempest_plugin.api.admin.test_shared_network_extension.RBACSharedNetworksTest.test_rbac_policy_quota

Traceback (most recent call last):
  File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/tempest/common/utils/__init__.py",
 line 89, in wrapper
return func(*func_args, **func_kwargs)
  File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron_tempest_plugin/api/admin/test_shared_network_extension.py",
 line 378, in test_rbac_policy_quota
self.assertGreater(max_policies, 0)
  File "/usr/lib64/python3.6/unittest/case.py", line 1238, in assertGreater
self.fail(self._formatMessage(msg, standardMsg))
  File "/usr/lib64/python3.6/unittest/case.py", line 687, in fail
raise self.failureException(msg)
AssertionError: -1 not greater than 0

Ran 1 test in 12.500s
FAILED (failures=1)

It seems that the RBAC policy quota setting causes the exception: when the
quota defaults to -1 (unlimited), the check in the test fails.
https://opendev.org/openstack/neutron-tempest-plugin/src/commit/14f44a0c29e3fed721313848f0f3dea2cd023dda/neutron_tempest_plugin/api/admin/test_shared_network_extension.py#L375
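
A simplified, hypothetical version of the failing check (not the actual
tempest test), showing why a default quota of -1 trips assertGreater and
how a test could treat -1 as "unlimited" instead:

import unittest

class RbacPolicyQuotaSketch(unittest.TestCase):
    def test_rbac_policy_quota(self):
        max_policies = -1  # what the quota API reports when set to unlimited
        if max_policies == -1:
            self.skipTest("RBAC policy quota is unlimited; nothing to verify")
        # The real test asserts this directly, so -1 makes it fail with:
        # AssertionError: -1 not greater than 0
        self.assertGreater(max_policies, 0)

if __name__ == '__main__':
    unittest.main()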

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2047101

Title:
  The test case for 'rbac_policy_quota' execution reports an error.

Status in neutron:
  New

Bug description:
  When we execute the test case 'test_rbac_policy_quota' in
  
neutron_tempest_plugin.api.admin.test_shared_network_extension.RBACSharedNetworksTest,
  the program throws an exception.

  * Step-by-step reproduction:
  1. Set up neutron-tempest-plugin.
  2. Run the test:

  (rally)[root@ci /]$ python -m testtools.run
  
neutron_tempest_plugin.api.admin.test_shared_network_extension.RBACSharedNetworksTest.test_rbac_policy_quota

  Traceback (most recent call last):
File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/tempest/common/utils/__init__.py",
 line 89, in wrapper
  return func(*func_args, **func_kwargs)
File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron_tempest_plugin/api/admin/test_shared_network_extension.py",
 line 378, in test_rbac_policy_quota
  self.assertGreater(max_policies, 0)
File "/usr/lib64/python3.6/unittest/case.py", line 1238, in assertGreater
  self.fail(self._formatMessage(msg, standardMsg))
File "/usr/lib64/python3.6/unittest/case.py", line 687, in fail
  raise self.failureException(msg)
  AssertionError: -1 not greater than 0

  Ran 1 test in 12.500s
  FAILED (failures=1)

  It seems that the RBAC policy quota setting causes the exception: when the
  quota defaults to -1 (unlimited), the check in the test fails.
  
https://opendev.org/openstack/neutron-tempest-plugin/src/commit/14f44a0c29e3fed721313848f0f3dea2cd023dda/neutron_tempest_plugin/api/admin/test_shared_network_extension.py#L375

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2047101/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2047100] [NEW] Agent resource cache updates

2023-12-21 Thread LIU Yulong
Public bug reported:

1. The agent resource cache has an infinite-growth set: _satisfied_server_queries
https://github.com/openstack/neutron/blob/master/neutron/agent/resource_cache.py#L41

There is no entry removal from this set.

2. The set has a non-standard structure, for instance:
set([('Port', ('id', (u'830d035e-5138-49ae-bbe4-324f4096656d',))), ('Network', 
('id', ('e04208de-1006-4a6b-881a-83129856afa6',))), ('Network', ('id', 
('505155ea-8fbb-42d1-a8c9-cc2c78f8476e',))), ('Port', ('id', 
(u'ac825cc9-906a-45db-a77d-4e336fc1c4ea',))), ('Port', ('id', 
(u'c3a72a39-dbd5-4737-a68c-120de93b186c',))), ('Network', ('id', 
('cd5155df-9777-4487-a730-b5ee533c4f80',))), ('Port', ('id', 
(u'340e02f2-fe54-4f31-8858-7f6413fb0010',))), ('Port', ('id', 
(u'64fc4d85-04d6-453f-8d20-f4d1308d34fd',))), ('Network', ('id', 
('a6201723-a237-433c-b357-82a6a24526e5',))), ('Network', ('id', 
('71a80697-1705-4bd0-b65b-0fd7dd616836',))), ('Port', ('security_group_ids', 
('48a2ebb8-16ea-4a0a-9d45-eabc6a6b3dcf',))), ('Network', ('id', 
('7e83c48a-b246-4a02-bb87-10016ac4b47e',))), ('SecurityGroupRule', 
('security_group_id', (u'48a2ebb8-16ea-4a0a-9d45-eabc6a6b3dcf',))), ('Port', 
('id', (u'2cc656ba-b07b-4e85-ad56-ee6da4b2e763',))), ('Port', ('id', 
(u'89d0aab8-82f7-4e5e-98b1-e009e31498ce',))), ('Port',
  ('id', (u'd820dbc2-bf4f-463b-9a67-6b704202bee0',))), ('Network', ('id', 
('aea5771b-9655-4936-b9f4-f94d482c0b15',))), ('Port', ('id', 
(u'68c3e31b-9bf6-45e9-bfbb-3da1cafebcec',)))])

It is hard to remove all of the entries for a single resource, because some
code queries the cache with filter=None while other code uses
filter={"x": y, "a": b}; the resulting entries vary, especially when the
querying code is not in Neutron.

3. If a port is removed and then added again, because the query is already in
_satisfied_server_queries:
https://github.com/openstack/neutron/blob/master/neutron/agent/resource_cache.py#L75

the cache may return None or a stale resource.

So it is better to remove such "server_queries" records: if the resource is
in the cache, just return it; if it is not, get it from neutron-server.
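
A hedged sketch of the lookup behaviour proposed above (not the actual
RemoteResourceCache implementation; the class and the server client's pull()
method are made up): serve from the local cache when the object is present,
otherwise always go back to neutron-server instead of trusting a previously
"satisfied" query.

class SimpleResourceCache:
    def __init__(self, server_client):
        self._cache = {}            # (rtype, obj_id) -> resource object
        self._server = server_client

    def get_resource_by_id(self, rtype, obj_id):
        obj = self._cache.get((rtype, obj_id))
        if obj is not None:
            return obj
        # Not cached (or evicted): fetch fresh data from neutron-server so a
        # re-created resource is never reported as missing or stale.
        obj = self._server.pull(rtype, obj_id)
        if obj is not None:
            self._cache[(rtype, obj_id)] = obj
        return obj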

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- 1. Agent resource cache has a infinite growth set: _satisfied_server_queries
+ 1. Agent resource cache has an infinite growth set: _satisfied_server_queries
  
https://github.com/openstack/neutron/blob/master/neutron/agent/resource_cache.py#L41
  
  there is no entry removal for this set.
  
  2. Because this set has a non-standard structure, for instance:
  set([('Port', ('id', (u'830d035e-5138-49ae-bbe4-324f4096656d',))), 
('Network', ('id', ('e04208de-1006-4a6b-881a-83129856afa6',))), ('Network', 
('id', ('505155ea-8fbb-42d1-a8c9-cc2c78f8476e',))), ('Port', ('id', 
(u'ac825cc9-906a-45db-a77d-4e336fc1c4ea',))), ('Port', ('id', 
(u'c3a72a39-dbd5-4737-a68c-120de93b186c',))), ('Network', ('id', 
('cd5155df-9777-4487-a730-b5ee533c4f80',))), ('Port', ('id', 
(u'340e02f2-fe54-4f31-8858-7f6413fb0010',))), ('Port', ('id', 
(u'64fc4d85-04d6-453f-8d20-f4d1308d34fd',))), ('Network', ('id', 
('a6201723-a237-433c-b357-82a6a24526e5',))), ('Network', ('id', 
('71a80697-1705-4bd0-b65b-0fd7dd616836',))), ('Port', ('security_group_ids', 
('48a2ebb8-16ea-4a0a-9d45-eabc6a6b3dcf',))), ('Network', ('id', 
('7e83c48a-b246-4a02-bb87-10016ac4b47e',))), ('SecurityGroupRule', 
('security_group_id', (u'48a2ebb8-16ea-4a0a-9d45-eabc6a6b3dcf',))), ('Port', 
('id', (u'2cc656ba-b07b-4e85-ad56-ee6da4b2e763',))), ('Port', ('id', 
(u'89d0aab8-82f7-4e5e-98b1-e009e31498ce',))), ('Port
 ', ('id', (u'd820dbc2-bf4f-463b-9a67-6b704202bee0',))), ('Network', ('id', 
('aea5771b-9655-4936-b9f4-f94d482c0b15',))), ('Port', ('id', 
(u'68c3e31b-9bf6-45e9-bfbb-3da1cafebcec',)))])
  
  It's hardly to remove all one ports' entry at all, because if some codes
  query cache by filter=None, some codes use fitler={"x":y, "a": b},
  especially when the code is not in Neutron.
  
  3. If the port removed, and added again, because the query is in the
  _satisfied_server_queries:
  
https://github.com/openstack/neutron/blob/master/neutron/agent/resource_cache.py#L75
  
  It may return None or stale resource.
  
  So, it's better to remove such "server_queries" records. Because, if the
  resource is in cache, just return it. If it is not, get it from neutron-
  server.

** Description changed:

  1. Agent resource cache has an infinite growth set: _satisfied_server_queries
  
https://github.com/openstack/neutron/blob/master/neutron/agent/resource_cache.py#L41
  
  there is no entry removal for this set.
  
  2. Because this set has a non-standard structure, for instance:
  set([('Port', ('id', (u'830d035e-5138-49ae-bbe4-324f4096656d',))), 
('Network', ('id', ('e04208de-1006-4a6b-881a-83129856afa6',))), ('Network', 
('id', ('505155ea-8fbb-42d1-a8c9-cc2c78f8476e',))), ('Port', ('id', 
(u'ac825cc9-906a-45db-a77d-4e336fc1c4ea',))), ('Port', ('id', 
(u'c3a72a39-dbd5-4737-a68c-120de93b186c',))), ('Network', ('id', 
('cd5155df-9777-448