[Yahoo-eng-team] [Bug 1261755] Re: Error message when image creation fails is insufficient

2019-08-01 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1261755

Title:
  Error message when image creation fails is insufficient

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Description of problem:

  The error notice is generic and does not give a reason for the problem;
  it should state that the creation failed due to a quota problem (see the
  sketch after the reproduction steps below).

  
  Version-Release number of selected component (if applicable):

  RHEL 6.5
  python-django-horizon-2013.2-8.el6ost.noarch

  
  How reproducible:
  Always.

  Steps to Reproduce:
  1. Set the glance user_total_storage quota.
  2. Upload an image file larger than the quota.
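  A minimal sketch of the requested behaviour, assuming Horizon's
  exceptions.handle helper and the openstack_dashboard api wrapper (the
  handler wiring below is illustrative, not the actual Horizon code):

    # Sketch: include the backend error text (e.g. the quota failure)
    # instead of a bare "Unable to create new image" notice.
    from django.utils.translation import ugettext_lazy as _
    from horizon import exceptions
    from openstack_dashboard import api

    def handle(self, request, data):
        try:
            return api.glance.image_create(request, **data)
        except Exception as e:
            exceptions.handle(request,
                              _('Unable to create new image: %s') % e)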

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1261755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836568] Re: Logs filled with unnecessary policy deprecation warnings

2019-08-01 Thread Colleen Murphy
** Also affects: oslo.policy
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1836568

Title:
  Logs filled with unnecessary policy deprecation warnings

Status in OpenStack Identity (keystone):
  Triaged
Status in oslo.policy:
  In Progress

Bug description:
  My keystone log from today's master version is full of:

  2019-07-15 10:47:25.316828 As of the Stein release, the domain API now understands how to handle
  2019-07-15 10:47:25.316831 system-scoped tokens in addition to project-scoped tokens, making the API more
  2019-07-15 10:47:25.316834 accessible to users without compromising security or manageability for
  2019-07-15 10:47:25.316837 administrators. The new default policies for this API account for these changes
  2019-07-15 10:47:25.316840 automatically
  2019-07-15 10:47:25.316843 . Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.
  2019-07-15 10:47:25.316846   warnings.warn(deprecated_msg)
  2019-07-15 10:47:25.316849 \x1b[00m

  2019-07-15 10:47:25.132244 2019-07-15 10:47:25.131 22582 WARNING py.warnings [req-0162c9d3-9953-4b2d-9587-6046651033c3 7b0f3387e0f942f3bae75cea0a5766a3 98500c83d03e4ba38aa27a78675d2b1b - default default] /usr/local/lib/python3.7/site-packages/oslo_policy/policy.py:695: UserWarning: Policy "identity:delete_credential":"rule:admin_required" was deprecated in S in favor of "identity:delete_credential":"(role:admin and system_scope:all) or user_id:%(target.credential.user_id)s". Reason: As of the Stein release, the credential API now understands how to handle system-scoped tokens in addition to project-scoped tokens, making the API more accessible to users without compromising security or manageability for administrators. The new default policies for this API account for these changes automatically.. Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.
  2019-07-15 10:47:25.132262   warnings.warn(deprecated_msg)
  2019-07-15 10:47:25.132266 \x1b[00m
  2019-07-15 10:47:25.132979 2019-07-15 10:47:25.132 22582 WARNING

  
  This is a fresh setup from `master` without any policy configuration, so
  keystone's own defaults trigger the warning.

  grep -R  'As of the Stein release' keystone-error.log |wc -l
  820

  
  Current master targets the `T` release; there is no point in emitting 820
  warnings (in the first ~10 minutes) for using the keystone defaults.

  
  Please make these warnings less noisy.
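  One way to cut the noise while keeping one heads-up per message (a sketch,
  not necessarily how oslo.policy will fix it): since the log above shows the
  messages arriving as UserWarning from oslo_policy.policy, the standard
  warnings filter can collapse the repeats:

    # Sketch: emit each distinct deprecation warning once per process.
    import warnings

    warnings.filterwarnings('once', category=UserWarning,
                            module='oslo_policy.policy')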

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1836568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1838694] Re: glanceclient doesn't cleanup session it creates if one is not provided

2019-08-01 Thread Alex Schultz
https://review.opendev.org/#/c/674133/

** Project changed: glance => python-glanceclient

** Changed in: python-glanceclient
 Assignee: (unassigned) => Alex Schultz (alex-schultz)

** Changed in: python-glanceclient
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1838694

Title:
  glanceclient doesn't cleanup session it creates if one is not provided

Status in Glance Client:
  In Progress

Bug description:
  If a session object is not provided to the glance client, the
  HTTPClient defined in glanceclient.common.http will create a session
  object. This session object leaks open connections because it is not
  properly closed when the object is no longer needed.  This leads to a
  ResourceWarning about an unclosed socket:

  sys:1: ResourceWarning: unclosed 


  Example code:

  $ cat g.py
  #!/usr/bin/python3 -Wd
  import glanceclient.common.http as h
  client = h.get_http_client(endpoint='https://192.168.24.2:13292',
                             token='',
                             cacert='/etc/pki/ca-trust/source/anchors/cm-local-ca.pem',
                             insecure=False)
  print(client.get('/v2/images'))

  
  Results in:

  $ ./g.py
  /usr/lib64/python3.6/importlib/_bootstrap_external.py:426: ImportWarning: Not importing directory /usr/lib/python3.6/site-packages/repoze: missing __init__
    _warnings.warn(msg.format(portions[0]), ImportWarning)
  /usr/lib64/python3.6/importlib/_bootstrap_external.py:426: ImportWarning: Not importing directory /usr/lib/python3.6/site-packages/paste: missing __init__
    _warnings.warn(msg.format(portions[0]), ImportWarning)
  /usr/lib/python3.6/site-packages/pytz/__init__.py:499: ResourceWarning: unclosed file <_io.TextIOWrapper name='/usr/share/zoneinfo/zone.tab' mode='r' encoding='UTF-8'>
    for l in open(os.path.join(_tzinfo_dir, 'zone.tab'))
  /usr/lib/python3.6/site-packages/eventlet/patcher.py:1: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
    import imp
  (, {'images': [{}], 'first': '/v2/images', 'schema': '/v2/schemas/images'})
  sys:1: ResourceWarning: unclosed

  
  This can be mitigated by adding a __del__ function to glanceclient.common.http.HTTPClient that closes the session.
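  A minimal sketch of that mitigation, assuming the client keeps the session
  it created on self.session and remembers ownership in a flag (the flag name
  is an assumption, not the actual attribute):

    # Sketch of the suggested __del__; attribute names are illustrative.
    class HTTPClient(object):
        def __del__(self):
            # Close only a session this client created itself; a session
            # passed in by the caller stays the caller's responsibility.
            if getattr(self, '_session_owned', False):
                try:
                    self.session.close()
                except Exception:
                    # __del__ must never raise, e.g. at interpreter shutdown.
                    pass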

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1838694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1837407] Re: keystone's keystone.conf.memcache socket_timeout isn't actually used

2019-08-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/672629
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=4b747fa083ad09ae8ba92dadc63ac0b830c7e562
Submitter: Zuul
Branch: master

commit 4b747fa083ad09ae8ba92dadc63ac0b830c7e562
Author: chenxing 
Date:   Thu Jul 25 13:53:21 2019 +0800

Deprecate keystone.conf.memcache socket_timeout

Change-Id: I5de14b5bd2d96c2f78152eda48842d388109e02b
Partial-Bug: #1838037
Closes-Bug: #1837407


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1837407

Title:
  keystone's keystone.conf.memcache socket_timeout isn't actually used

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Keystone has a timeout configuration option for memcache [0].
  Keystone's definition of this option isn't actually used anywhere, so
  it appears to be a broken knob.

  In fact oslo.cache has a duplicate option that appears to be used
  instead [1].

  We should deprecate the keystone-specific option and point people to
  the oslo.cache option.
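  The deprecation can be as small as flagging the option for removal in
  oslo.config; a sketch along the lines of keystone/conf/memcache.py (the
  exact wording is illustrative):

    # Sketch: mark the dead knob deprecated and point at the oslo.cache
    # replacement referenced in [1].
    from oslo_config import cfg

    socket_timeout = cfg.IntOpt(
        'socket_timeout',
        default=3,
        deprecated_for_removal=True,
        deprecated_reason='This option has no effect; use the oslo.cache '
                          '[cache] memcache_socket_timeout option instead.',
        help='Timeout in seconds for every call to a server.')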

  [0] 
https://opendev.org/openstack/keystone/src/commit/a0aa21c237f7b42077fc945f157844deb77be5ef/keystone/conf/memcache.py#L26-L32
  [1] 
https://opendev.org/openstack/oslo.cache/src/commit/a5023ba2754dd537c802d4a59290ff6378bd6285/oslo_cache/_opts.py#L85-L89

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1837407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1838699] [NEW] Removing a subnet from DVR router also removes DVR MAC flows for other router on network

2019-08-01 Thread Arjun Baindur
Public bug reported:

This bug builds on the issue seen in
https://bugs.launchpad.net/neutron/+bug/1838697

In that issue, if you create a tenant network, some VMs, and attach it
to 2 DVR routers, only the DVR MAC rules exist for the first router.

With this issue, simply removing the subnet from the second router, or
deleting that router, ends up deleting all the DVR MAC flows for the
first router. It deletes both the table=1 and table=60 rules for ALL
local endpoints on that network.

For example:

fa:16:3e:ce:f8:cd = MAC of a VM on this particular host
fa:16:3e:5c:44:da = MAC of router_interface_distributed port of 1st router
fa:16:3e:19:67:9e = MAC of router_interface_distributed port on 2nd router


When the network is attached to 2 routers:

[r...@pf9-kvm-neutron.platform9.net arjun(admin)]# openstack port list --network 8cd0e19e-9041-4a62-9cc9-6bfb5b10f955 --long
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+---------------------------------------+------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                           | Status | Security Groups                      | Device Owner                          | Tags |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+---------------------------------------+------+
| 16e971ae-0ce9-4f4a-aaab-6ab3fc71bf93 |      | fa:16:3e:79:66:c8 | ip_address='10.23.23.9', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| 1ef66f53-7818-4281-b407-9be7d55b3b17 |      | fa:16:3e:ce:f8:cd | ip_address='10.23.23.7', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| 21553560-5491-4036-9d03-65d7bedb28dc |      | fa:16:3e:0a:ff:1b | ip_address='10.23.23.2', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE |                                      | network:dhcp                          |      |
| 386d3d98-6c86-4748-9c2e-8b60fbe3f6cc |      | fa:16:3e:c9:19:14 | ip_address='10.23.23.25', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'   | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| 4e211475-91e0-4627-8342-837210219fbc |      | fa:16:3e:19:67:9e | ip_address='10.23.23.199', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'  | ACTIVE | ecd04202-0111-4e29-8e2f-39a203123c75 | network:router_interface_distributed  |      |
| 7be10a79-e581-4ba9-95c9-870e845dbea0 |      | fa:16:3e:0b:9b:e3 | ip_address='10.23.23.28', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'   | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| be9d8d83-0c55-49aa-836e-bb4f483bde48 |      | fa:16:3e:21:76:67 | ip_address='10.23.23.4', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE |                                      | network:dhcp                          |      |
| d266f85c-14b1-4c47-a357-44cd0fa4b557 |      | fa:16:3e:c4:f0:ce | ip_address='10.23.23.3', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE |                                      | network:dhcp                          |      |
| de2fb0b6-9756-4418-8501-be202afbf006 |      | fa:16:3e:e7:f6:6c | ip_address='10.23.23.14', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'   | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| f00fa134-da4d-4663-8d94-52de0840f9d4 |      | fa:16:3e:2e:3c:8a | ip_address='10.23.23.5', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| f33d9ba4-cfdc-42f3-aff4-e5221f84ac03 |      | fa:16:3e:c9:86:97 | ip_address='10.23.23.6', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| f763ba3f-fae2-4608-8ef9-10ccc023eacc |      | fa:16:3e:5c:44:da | ip_address='10.23.23.1', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE |                                      | network:router_interface_distributed  |      |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+---------------------------------------+------+

[root@chef ~]# ovs-ofctl dump-flows br-int | grep fa:16:3e:ce:f8:cd
 cookie=0xbdf055421ffc2398, duration=222.793s, table=1, n_packets=0, n_bytes=0, idle_age=1843, priority=4,dl_vlan=13,dl_dst=fa:16:3e:ce:f8:cd

[Yahoo-eng-team] [Bug 1838697] [NEW] DVR Mac conversion rules are only added for the first router a network is attached to

2019-08-01 Thread Arjun Baindur
Public bug reported:

This is seen on stable/pike; we have not checked latest or stein.

1. Create a basic tenant network and create a DVR router, attach them.
Spin up some VMs:

[r...@pf9-kvm-neutron.platform9.net arjun(admin)]# openstack port list --network 8cd0e19e-9041-4a62-9cc9-6bfb5b10f955 --long
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+---------------------------------------+------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                           | Status | Security Groups                      | Device Owner                          | Tags |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+---------------------------------------+------+
| 16e971ae-0ce9-4f4a-aaab-6ab3fc71bf93 |      | fa:16:3e:79:66:c8 | ip_address='10.23.23.9', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| 1ef66f53-7818-4281-b407-9be7d55b3b17 |      | fa:16:3e:ce:f8:cd | ip_address='10.23.23.7', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| 21553560-5491-4036-9d03-65d7bedb28dc |      | fa:16:3e:0a:ff:1b | ip_address='10.23.23.2', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE |                                      | network:dhcp                          |      |
| 386d3d98-6c86-4748-9c2e-8b60fbe3f6cc |      | fa:16:3e:c9:19:14 | ip_address='10.23.23.25', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'   | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| 7be10a79-e581-4ba9-95c9-870e845dbea0 |      | fa:16:3e:0b:9b:e3 | ip_address='10.23.23.28', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'   | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| be9d8d83-0c55-49aa-836e-bb4f483bde48 |      | fa:16:3e:21:76:67 | ip_address='10.23.23.4', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE |                                      | network:dhcp                          |      |
| d266f85c-14b1-4c47-a357-44cd0fa4b557 |      | fa:16:3e:c4:f0:ce | ip_address='10.23.23.3', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE |                                      | network:dhcp                          |      |
| de2fb0b6-9756-4418-8501-be202afbf006 |      | fa:16:3e:e7:f6:6c | ip_address='10.23.23.14', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'   | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| f00fa134-da4d-4663-8d94-52de0840f9d4 |      | fa:16:3e:2e:3c:8a | ip_address='10.23.23.5', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| f33d9ba4-cfdc-42f3-aff4-e5221f84ac03 |      | fa:16:3e:c9:86:97 | ip_address='10.23.23.6', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE | bd5274ad-2ff9-443a-9226-473cf129e915 | compute:None                          |      |
| f763ba3f-fae2-4608-8ef9-10ccc023eacc |      | fa:16:3e:5c:44:da | ip_address='10.23.23.1', subnet_id='f012101e-91ac-4b85-947e-0f9eca83d5e8'    | ACTIVE |                                      | network:router_interface_distributed  |      |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+---------------------------------------+------+

2. Check on a host where one of the VMs is. Search for the router port's
MAC in br-int. For inbound packets (arriving from br-tun), there is a
table=1 rule that converts from the remote per-host DVR MAC to the common
MAC of the network:router_interface_distributed port:


[root@chef ~]# ovs-ofctl dump-flows br-int | grep fa:16:3e:5c:44:da
 cookie=0xbdf055421ffc2398, duration=256.770s, table=1, n_packets=0, n_bytes=0, idle_age=1877, priority=4,dl_vlan=13,dl_dst=fa:16:3e:ce:f8:cd actions=mod_dl_src:fa:16:3e:5c:44:da,resubmit(,60)


3. Now create a 2nd router. Attach the same network to this router. Notice
that this network now has 2 network:router_interface_distributed ports, but
the DVR MAC conversion rules are missing for this other router's MAC. Only
the first one is present:

[r...@pf9-kvm-neutron.platform9.net arjun(admin)]# openstack port list 
--network 8cd0e19e-9041-4a62-9cc9-6bfb5b10f955 --long

[Yahoo-eng-team] [Bug 1838694] [NEW] glanceclient doesn't cleanup session it creates if one is not provided

2019-08-01 Thread Alex Schultz
Public bug reported:

If a session object is not provided to the glance client, the HTTPClient
defined in glanceclient.common.http will create a session object. This
session object leaks open connections because it is not properly closed
when the object is no longer needed.  This leads to a ResourceWarning
about an unclosed socket:

sys:1: ResourceWarning: unclosed 


Example code:

$ cat g.py
#!/usr/bin/python3 -Wd
import glanceclient.common.http as h
client = h.get_http_client(endpoint='https://192.168.24.2:13292',
                           token='',
                           cacert='/etc/pki/ca-trust/source/anchors/cm-local-ca.pem',
                           insecure=False)
print(client.get('/v2/images'))


Results in:

$ ./g.py
/usr/lib64/python3.6/importlib/_bootstrap_external.py:426: ImportWarning: Not importing directory /usr/lib/python3.6/site-packages/repoze: missing __init__
  _warnings.warn(msg.format(portions[0]), ImportWarning)
/usr/lib64/python3.6/importlib/_bootstrap_external.py:426: ImportWarning: Not importing directory /usr/lib/python3.6/site-packages/paste: missing __init__
  _warnings.warn(msg.format(portions[0]), ImportWarning)
/usr/lib/python3.6/site-packages/pytz/__init__.py:499: ResourceWarning: unclosed file <_io.TextIOWrapper name='/usr/share/zoneinfo/zone.tab' mode='r' encoding='UTF-8'>
  for l in open(os.path.join(_tzinfo_dir, 'zone.tab'))
/usr/lib/python3.6/site-packages/eventlet/patcher.py:1: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
(, {'images': [{}], 'first': '/v2/images', 'schema': '/v2/schemas/images'})
sys:1: ResourceWarning: unclosed


This can be mitigated by adding a __del__ function to glanceclient.common.http.HTTPClient that closes the session.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1838694

Title:
  glanceclient doesn't cleanup session it creates if one is not provided

Status in Glance:
  New

Bug description:
  If a session object is not provided to the glance client, the
  HTTPClient defined in glanceclient.common.http will create a session
  object. This session object leaks open connections because it is not
  properly closed when the object is no longer needed.  This leads to a
  ResourceWarning about an unclosed socket:

  sys:1: ResourceWarning: unclosed 


  Example code:

  $ cat g.py
  #!/usr/bin/python3 -Wd
  import glanceclient.common.http as h
  client = h.get_http_client(endpoint='https://192.168.24.2:13292',
                             token='',
                             cacert='/etc/pki/ca-trust/source/anchors/cm-local-ca.pem',
                             insecure=False)
  print(client.get('/v2/images'))

  
  Results in:

  $ ./g.py
  /usr/lib64/python3.6/importlib/_bootstrap_external.py:426: ImportWarning: Not importing directory /usr/lib/python3.6/site-packages/repoze: missing __init__
    _warnings.warn(msg.format(portions[0]), ImportWarning)
  /usr/lib64/python3.6/importlib/_bootstrap_external.py:426: ImportWarning: Not importing directory /usr/lib/python3.6/site-packages/paste: missing __init__
    _warnings.warn(msg.format(portions[0]), ImportWarning)
  /usr/lib/python3.6/site-packages/pytz/__init__.py:499: ResourceWarning: unclosed file <_io.TextIOWrapper name='/usr/share/zoneinfo/zone.tab' mode='r' encoding='UTF-8'>
    for l in open(os.path.join(_tzinfo_dir, 'zone.tab'))
  /usr/lib/python3.6/site-packages/eventlet/patcher.py:1: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
    import imp
  (, {'images': [{}], 'first': '/v2/images', 'schema': '/v2/schemas/images'})
  sys:1: ResourceWarning: unclosed

  
  This can be mitigated by adding a __del__ function to glanceclient.common.http.HTTPClient that closes the session.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1838694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1838689] [NEW] rpc_workers default value ignores setting of api_workers

2019-08-01 Thread Adam Spiers
Public bug reported:

The help for the rpc_workers config option is:

Number of RPC worker processes for service.  If not specified, the
default is equal to half the number of API workers.

However, this does not accurately describe the current behaviour, which
is to default to half the _default_ number of API workers.  This can
make a big difference; for example on a 256-CPU machine with 256GB of
RAM which has api_workers configured to 8 but rpc_workers not configured
to anything, this will result in 64 RPC workers, which is 8 for every
API worker!
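
A sketch of the behaviour the help text promises (option names follow
neutron's config; the helper itself is hypothetical):

    # Sketch only: derive the default from the *configured* api_workers
    # value rather than from the api_workers default.
    from oslo_config import cfg

    def effective_rpc_workers(conf=cfg.CONF):
        if conf.rpc_workers is not None:
            return conf.rpc_workers
        # Half the configured API workers: api_workers=8 -> 4 RPC workers,
        # never the 64 derived from a 256-CPU machine's default.
        return max(conf.api_workers // 2, 1)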

** Affects: neutron
 Importance: Undecided
 Assignee: Adam Spiers (adam.spiers)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Adam Spiers (adam.spiers)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1838689

Title:
  rpc_workers default value ignores setting of api_workers

Status in neutron:
  New

Bug description:
  The help for the rpc_workers config option is:

  Number of RPC worker processes for service.  If not specified, the
  default is equal to half the number of API workers.

  However, this does not accurately describe the current behaviour,
  which is to default to half the _default_ number of API workers.  This
  can make a big difference; for example on a 256-CPU machine with 256GB
  of RAM which has api_workers configured to 8 but rpc_workers not
  configured to anything, this will result in 64 RPC workers, which is 8
  for every API worker!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1838689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1838666] [NEW] lxml 4.4.0 causes failed tests in nova

2019-08-01 Thread Matthew Thode
Public bug reported:

It looks like it's just an ordering issue for the elements that are
returned.

See https://review.opendev.org/673848 for details on the failure (you
can depend on it for testing fixes as well).
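
If it really is only element ordering, the affected tests can compare a
canonical form instead of raw serializations; a sketch (not the actual
nova fix):

    # Sketch: sort sibling elements by tag before comparing, so the
    # ordering change in lxml 4.4.0 stops mattering.
    from lxml import etree

    def canonical(xml_text):
        root = etree.fromstring(xml_text)
        for parent in list(root.iter()):
            parent[:] = sorted(parent, key=lambda el: str(el.tag))
        return etree.tostring(root)

    assert canonical(b'<a><y/><x/></a>') == canonical(b'<a><x/><y/></a>')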

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1838666

Title:
  lxml 4.4.0 causes failed tests in nova

Status in OpenStack Compute (nova):
  New

Bug description:
  It looks like it's just an ordering issue for the elements that are
  returned.

  See https://review.opendev.org/673848 for details on the failure (you
  can depend on it for testing fixes as well).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1838666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1754062] Re: openstack client does not pass prefixlen when creating subnet

2019-08-01 Thread Bernard Cafarelli
Marking as fix released in openstacksdk with
https://review.opendev.org/#/c/550558/

** Changed in: python-openstacksdk
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1754062

Title:
  openstack client does not pass prefixlen when creating subnet

Status in neutron:
  Fix Released
Status in OpenStack SDK:
  Fix Released

Bug description:
  Version: Pike
  OpenStack Client: 3.12.0

  When testing Subnet Pool functionality, I found that the behavior
  between the openstack and neutron clients is different.

  Subnet pool:

  root@controller01:~# openstack subnet pool show MySubnetPool
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | address_scope_id  | None                                 |
  | created_at        | 2018-03-07T13:18:22Z                 |
  | default_prefixlen | 8                                    |
  | default_quota     | None                                 |
  | description       |                                      |
  | id                | e49703d8-27f4-4a16-9bf4-91a6cf00fff3 |
  | ip_version        | 4                                    |
  | is_default        | False                                |
  | max_prefixlen     | 32                                   |
  | min_prefixlen     | 8                                    |
  | name              | MySubnetPool                         |
  | prefixes          | 172.31.0.0/16                        |
  | project_id        | 9233b6b4f6a54386af63c0a7b8f043c2     |
  | revision_number   | 0                                    |
  | shared            | False                                |
  | tags              |                                      |
  | updated_at        | 2018-03-07T13:18:22Z                 |
  +-------------------+--------------------------------------+

  When attempting to create a /28 subnet from that pool with the
  openstack client, the following error is observed:

  root@controller01:~# openstack subnet create \
  > --subnet-pool MySubnetPool \
  > --prefix-length 28 \
  > --network MyVLANNetwork2 \
  > MyFlatSubnetFromPool
  HttpException: Internal Server Error (HTTP 500) (Request-ID: req-61b3f00a-9764-4bcb-899d-e85d66f54e5a), Failed to allocate subnet: Insufficient prefix space to allocate subnet size /8.

  However, the same request is successful with the neutron client:

  root@controller01:~# neutron subnet-create --subnetpool MySubnetPool --prefixlen 28 --name MySubnetFromPool MyVLANNetwork2
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  Created a new subnet:
  +-------------------+-----------------------------------------------+
  | Field             | Value                                         |
  +-------------------+-----------------------------------------------+
  | allocation_pools  | {"start": "172.31.0.2", "end": "172.31.0.14"} |
  | cidr              | 172.31.0.0/28                                 |
  | created_at        | 2018-03-07T13:35:35Z                          |
  | description       |                                               |
  | dns_nameservers   |                                               |
  | enable_dhcp       | True                                          |
  | gateway_ip        | 172.31.0.1                                    |
  | host_routes       |                                               |
  | id                | 43cb9dda-1b7e-436d-9dc1-5312866a1b63          |
  | ip_version        | 4                                             |
  | ipv6_address_mode |                                               |
  | ipv6_ra_mode      |                                               |
  | name              | MySubnetFromPool                              |
  | network_id        | e01ca743-607c-4a94-9176-b572a46fba84          |
  | project_id        | 9233b6b4f6a54386af63c0a7b8f043c2              |
  | revision_number   | 0                                             |
  | service_types     |                                               |
  | subnetpool_id     | e49703d8-27f4-4a16-9bf4-91a6cf00fff3          |
  | tags              |                                               |
  | tenant_id         | 9233b6b4f6a54386af63c0a7b8f043c2              |
  | updated_at        | 2018-03-07T13:35:35Z                          |
  +-------------------+-----------------------------------------------+

  The payload is different between these clients - the openstack client
  fails to send the prefixlen key.
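
  For reference, the body neutron needs in order to allocate from a pool
  with an explicit prefix length looks roughly like this (a sketch shown as
  a Python dict, reusing the IDs from the output above):

    # Sketch of the POST /v2.0/subnets body; 'prefixlen' is the key the
    # openstack client omits.
    body = {
        'subnet': {
            'network_id': 'e01ca743-607c-4a94-9176-b572a46fba84',
            'subnetpool_id': 'e49703d8-27f4-4a16-9bf4-91a6cf00fff3',
            'ip_version': 4,
            'prefixlen': 28,
            'name': 'MySubnetFromPool',
        }
    }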

  openstack client:

  REQ: curl -g -i -X POST http://controller01:9696/v2.0/subnets -H "User-Agent: openstacksdk/0.9.17 keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12" -H "Content-Type: application/json" -H "X-Auth-Token: 

[Yahoo-eng-team] [Bug 1838621] [NEW] [RFE] Configure extra dhcp options via API and per network

2019-08-01 Thread Slawek Kaplonski
Public bug reported:

Currently the Neutron DHCP agent has a config option, "dnsmasq_config_file",
which lets us define the path to a dnsmasq config file containing some extra
DHCP options. The problem is that this file, always with the same options, is
used for every network hosted by the DHCP agent. There is no way to set extra
DHCP options per network.

I think it would be good to add an API extension that extends neutron's
network resource with a new attribute, "dhcp_extra_options", which would
allow configuring those extra options per network instead of per agent.

I propose to keep the possibility of using "dnsmasq_config_file" for a cloud
operator who wants to force some dnsmasq options for all networks, and
additionally apply the options configured per network in the network object.
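
To make the proposal concrete, the new attribute might be used roughly like
this (purely illustrative; neither the attribute nor the extension exists
yet):

    # Hypothetical network body once the proposed extension exists:
    # dnsmasq options that today can only live in the global
    # dnsmasq_config_file, scoped to one network.
    body = {
        'network': {
            'name': 'net-with-dhcp-opts',
            'dhcp_extra_options': {
                'dhcp-option-force': '26,1450',  # e.g. force MTU 1450
            },
        }
    }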

** Affects: neutron
 Importance: Wishlist
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1838621

Title:
  [RFE] Configure extra dhcp options via API and per network

Status in neutron:
  New

Bug description:
  Currently the Neutron DHCP agent has a config option, "dnsmasq_config_file",
  which lets us define the path to a dnsmasq config file containing some extra
  DHCP options. The problem is that this file, always with the same options,
  is used for every network hosted by the DHCP agent. There is no way to set
  extra DHCP options per network.

  I think it would be good to add an API extension that extends neutron's
  network resource with a new attribute, "dhcp_extra_options", which would
  allow configuring those extra options per network instead of per agent.

  I propose to keep the possibility of using "dnsmasq_config_file" for a
  cloud operator who wants to force some dnsmasq options for all networks,
  and additionally apply the options configured per network in the network
  object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1838621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633120] Re: [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance

2019-08-01 Thread Edward Hope-Morley
Mitaka not backportable so abandoning:

$ git-deps -e mitaka-eol 5c5a6b93a07b0b58f513396254049c17e2883894^!
c2c3b97259258eec3c98feabde3b411b519eae6e

$ git-deps -e mitaka-eol c2c3b97259258eec3c98feabde3b411b519eae6e^!
a023c32c70b5ddbae122636c26ed32e5dcba66b2
74fbff88639891269f6a0752e70b78340cf87e9a
e83842b80b73c451f78a4bb9e7bd5dfcebdefcab
1f259e2a9423a4777f79ca561d5e6a74747a5019
b01187eede3881f72addd997c8fd763ddbc137fc
49d9433c62d74f6ebdcf0832e3a03e544b1d6c83


** Changed in: cloud-archive/mitaka
   Status: Triaged => Won't Fix

** Changed in: nova (Ubuntu Xenial)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633120

Title:
  [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to
  a new instance

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Won't Fix
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in OpenStack Compute (nova) pike series:
  Fix Committed
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Won't Fix
Status in nova source package in Bionic:
  Fix Released
Status in nova source package in Cosmic:
  Fix Released
Status in nova source package in Disco:
  Fix Released
Status in nova source package in Eoan:
  Fix Released

Bug description:
  [Impact]
  This patch is required to prevent nova from accidentally marking pci_device allocations as deleted when it incorrectly reads the passthrough whitelist.

  [Test Case]
  * deploy openstack (any version that supports sriov)
  * single compute configured for sriov with at least one device in pci_passthrough_whitelist
  * create a vm and attach sriov port
  * remove device from pci_passthrough_whitelist and restart nova-compute
  * check that pci_devices allocations have not been marked as deleted

  [Regression Potential]
  None anticipated
  
  Upon trying to create VM instance (say A) with one QAT VF, it fails with the following error: “Requested operation is not valid: PCI device :88:04.7 is in use by driver QEMU, domain instance-0081”. Please note that PCI device :88:04.7 is already assigned to another VM (say B). We have installed the openstack-mitaka release on a CentOS 7 system. It has two Intel QAT devices. There are 32 VF devices available per QAT/DH895xCC device. Out of 64 VFs, only 8 are allocated (to VM instances) and the rest should be available.
  But the nova scheduler tries to assign an already-in-use SRIOV VF to a new instance, and the instance fails. It appears that the nova database is not tracking which VFs have already been taken. But if I shut down VM B, then VM A boots up, and vice versa. Note that both VM instances cannot run simultaneously because of the aforesaid issue.

  We should always be able to create as many instances with the
  requested PCI devices as there are available VFs.

  Please feel free to let me know if additional information is needed.
  Can anyone please suggest why it tries to assign the same PCI device
  which has already been assigned? Is there any way to resolve this issue?
  Thank you in advance for your support and help.

  [root@localhost ~(keystone_admin)]# lspci -d:435
  83:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
  88:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
  [root@localhost ~(keystone_admin)]#

  [root@localhost ~(keystone_admin)]# lspci -d:443 | grep "QAT Virtual 
Function" | wc -l
  64
  [root@localhost ~(keystone_admin)]#

  [root@localhost ~(keystone_admin)]# mysql -u root nova -e "SELECT hypervisor_hostname, address, instance_uuid, status FROM pci_devices JOIN compute_nodes ON compute_nodes.id=compute_node_id" | grep :88:04.7
  localhost                   :88:04.7   e10a76f3-e58e-4071-a4dd-7a545e8000de   allocated
  localhost                   :88:04.7   c3dbac90-198d-4150-ba0f-a80b912d8021   allocated
  localhost                   :88:04.7   c7f6adad-83f0-4881-b68f-6d154d565ce3   allocated
  localhost.nfv.benunets.com  :88:04.7   0c3c11a5-f9a4-4f0d-b120-40e4dde843d4   allocated
  [root@localhost ~(keystone_admin)]#

  [root@localhost ~(keystone_admin)]# grep -r e10a76f3-e58e-4071-a4dd-7a545e8000de /etc/libvirt/qemu
  /etc/libvirt/qemu/instance-0081.xml:  e10a76f3-e58e-4071-a4dd-7a545e8000de
  

[Yahoo-eng-team] [Bug 1829062] Re: nova placement api non-responsive due to eventlet error

2019-08-01 Thread Bogdan Dobrelya
** Changed in: tripleo
   Status: Fix Released => Triaged

** Changed in: tripleo
 Assignee: Bogdan Dobrelya (bogdando) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829062

Title:
  nova placement api non-responsive due to eventlet error

Status in OpenStack Compute (nova):
  New
Status in StarlingX:
  Fix Released
Status in tripleo:
  Triaged

Bug description:
  In a starlingx setup, we're running a nova docker image based on nova stable/stein as of May 6.
  We're seeing nova-compute processes stall and not create resource providers with placement.
  openstack hypervisor list
  +----+---------------------+-----------------+-----------------+-------+
  | ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
  +----+---------------------+-----------------+-----------------+-------+
  | 5  | worker-1            | QEMU            | 192.168.206.247 | down  |
  | 8  | worker-2            | QEMU            | 192.168.206.211 | down  |
  +----+---------------------+-----------------+-----------------+-------+

  Observe this error in nova-placement-api logs related to eventlet at the same time:
  2019-05-14 00:44:03.636229 Traceback (most recent call last):
  2019-05-14 00:44:03.636276 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 460, in fire_timers
  2019-05-14 00:44:03.636536 timer()
  2019-05-14 00:44:03.636560 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 59, in __call__
  2019-05-14 00:44:03.636647 cb(*args, **kw)
  2019-05-14 00:44:03.636661 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/semaphore.py", line 147, in _do_acquire
  2019-05-14 00:44:03.636774 waiter.switch()
  2019-05-14 00:44:03.636792 error: cannot switch to a different thread

  This is a new behaviour for us in stable/stein, and we suspect it is due to the merge of an eventlet-related change on May 4:
  https://github.com/openstack/nova/commit/6755034e109079fb5e8bbafcd611a919f0884d14

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1829062] Re: nova placement api non-responsive due to eventlet error

2019-08-01 Thread Bogdan Dobrelya
** Changed in: tripleo
   Status: In Progress => Fix Released

** Tags added: queens-backport-potential rocky-backport-potential stein-
backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829062

Title:
  nova placement api non-responsive due to eventlet error

Status in OpenStack Compute (nova):
  New
Status in StarlingX:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  In a starlingx setup, we're running a nova docker image based on nova stable/stein as of May 6.
  We're seeing nova-compute processes stall and not create resource providers with placement.
  openstack hypervisor list
  +----+---------------------+-----------------+-----------------+-------+
  | ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
  +----+---------------------+-----------------+-----------------+-------+
  | 5  | worker-1            | QEMU            | 192.168.206.247 | down  |
  | 8  | worker-2            | QEMU            | 192.168.206.211 | down  |
  +----+---------------------+-----------------+-----------------+-------+

  Observe this error in nova-placement-api logs related to eventlet at the same time:
  2019-05-14 00:44:03.636229 Traceback (most recent call last):
  2019-05-14 00:44:03.636276 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 460, in fire_timers
  2019-05-14 00:44:03.636536 timer()
  2019-05-14 00:44:03.636560 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 59, in __call__
  2019-05-14 00:44:03.636647 cb(*args, **kw)
  2019-05-14 00:44:03.636661 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/semaphore.py", line 147, in _do_acquire
  2019-05-14 00:44:03.636774 waiter.switch()
  2019-05-14 00:44:03.636792 error: cannot switch to a different thread

  This is a new behaviour for us in stable/stein, and we suspect it is due to the merge of an eventlet-related change on May 4:
  https://github.com/openstack/nova/commit/6755034e109079fb5e8bbafcd611a919f0884d14

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1838617] [NEW] ssh connection getting dropped frequently

2019-08-01 Thread Jagatjot Singh
Public bug reported:

OpenStack version: Pike. Installation process: multinode kolla.

We are frequently facing ssh connection loss on the VMs running in our
production environment. Available RAM on our controller nodes becomes low,
because of which memory automatically shifts to swap. Moreover, we checked
the CPU utilization of the neutron-l3-agents by running the docker stats
command: neutron-l3-agent CPU utilization shows frequent spikes varying
from 25% to 250%. We have also verified all the OpenStack services, and
they are all running fine without any errors in the logs. However, the ssh
connections to all the VMs get dropped after 5-10 seconds without any
error. Could you confirm the exact reason why we are facing this issue?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1838617

Title:
  ssh connection getting dropped frequently

Status in neutron:
  New

Bug description:
  OpenStack version: Pike. Installation process: multinode kolla.

  We are frequently facing ssh connection loss on the VMs running in our
  production environment. Available RAM on our controller nodes becomes low,
  because of which memory automatically shifts to swap. Moreover, we checked
  the CPU utilization of the neutron-l3-agents by running the docker stats
  command: neutron-l3-agent CPU utilization shows frequent spikes varying
  from 25% to 250%. We have also verified all the OpenStack services, and
  they are all running fine without any errors in the logs. However, the ssh
  connections to all the VMs get dropped after 5-10 seconds without any
  error. Could you confirm the exact reason why we are facing this issue?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1838617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1838606] [NEW] Incorrect Error message when user try to 'Create Application Credential' with past date

2019-08-01 Thread Vishal Manchanda
Public bug reported:

When a user tries to 'Create Application Credential' with a past/old
date, they get the error message "Unable to create application
credential" in the GUI, but IMO we should raise an error message like
keystone's: "The 'expires_at' must not be before now".

** Affects: horizon
 Importance: Undecided
 Assignee: Vishal Manchanda (vishalmanchanda)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1838606

Title:
  Incorrect Error message when user try to 'Create  Application
  Credential'  with past date

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When a user tries to 'Create Application Credential' with a past/old
  date, they get the error message "Unable to create application
  credential" in the GUI, but IMO we should raise an error message like
  keystone's: "The 'expires_at' must not be before now".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1838606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1837529] Re: Cannot use push-notification with custom objects

2019-08-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/672261
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2db02023eec96d825749490d40c750a37b97daec
Submitter: Zuul
Branch:master

commit 2db02023eec96d825749490d40c750a37b97daec
Author: Roman Dobosz 
Date:   Tue Jul 23 10:45:19 2019 +

Initialize modifiable list of resources in CacheBackedPluginApi.

Currently, if one wants to add any other resources (including custom
objects), there is no simple way to achieve that, since the list of defined
resource types is hardcoded in the create_cache_for_l2_agent function,
which is called in __init__ of CacheBackedPluginApi. Even if we
derive from it, we must call super() in the descendant, otherwise we end up
with an uninitialized PluginApi part. But if we do call super(), we end
up having only the hardcoded resources, and creating a new remote
resource cache object will make a new set of listeners, while the
listeners for the old object still exist and may cause memory leaks.
The RemoteResourceWatcher class has only initializers for those listeners,
and there is no obvious way to stop/clean them.

In this patch we propose to move the create_cache_for_l2_agent function to
the CacheBackedPluginApi class and make the resource list a class
attribute, so that it can be easily modified.

Change-Id: Ia65ecaf7b48926b74505226a5922b85e2cb593a6
Closes-Bug: 1837529


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837529

Title:
  Cannot use push-notification with custom objects

Status in neutron:
  Fix Released

Bug description:
  We have a custom object which we would like to have updated in the remote
  resource cache. Currently, in CacheBackedPluginApi the resource cache is
  created on initialization by the create_cache_for_l2_agent function, which
  has a fixed list of resources to subscribe to.

  If we want to use an additional type of resource, there is no way other
  than to either copy the entire class and use a custom cache creation
  function, or alter the list in the neutron code, which is bad either way.

  This isn't a bug; rather it's an annoying inconvenience, which might be
  easily fixed (see the sketch below).
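
  With the fix, extending the list becomes a small subclass; a sketch (the
  attribute name follows the patch's intent, not necessarily its final
  spelling):

    # Sketch: with the resource list as a class attribute, a custom agent
    # can extend it before the remote resource cache is built.
    from neutron.agent.rpc import CacheBackedPluginApi

    class MyPluginApi(CacheBackedPluginApi):
        # Hypothetical attribute; the patch moves the hardcoded list here.
        RESOURCE_TYPES = CacheBackedPluginApi.RESOURCE_TYPES + [
            'my_custom_object']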

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1837529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp