[Yahoo-eng-team] [Bug 1749574] Re: [tracking] removal and migration of pycrypto

2023-04-25 Thread Grzegorz Grasza
Closing out bugs created before migration to StoryBoard. Please re-open
if you are of the opinion it is still current.

** Changed in: barbican
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1749574

Title:
  [tracking] removal and migration of pycrypto

Status in Barbican:
  Fix Released
Status in Compass:
  New
Status in daisycloud:
  New
Status in OpenStack Backup/Restore and DR (Freezer):
  New
Status in Fuel for OpenStack:
  New
Status in OpenStack Compute (nova):
  Triaged
Status in openstack-ansible:
  Fix Released
Status in OpenStack Global Requirements:
  Fix Released
Status in pyghmi:
  Fix Committed
Status in Solum:
  Fix Released
Status in Tatu:
  New
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  trove
  tatu
  barbican
  compass
  daisycloud
  freezer
  fuel
  nova
  openstack-ansible - https://review.openstack.org/544516
  pyghmi - https://review.openstack.org/569073
  solum
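
  For context, the migration generally means porting each project's pycrypto
  calls to the cryptography library. A minimal, hedged sketch of the kind of
  change involved (illustrative only, not taken from any of the reviews
  above; key and nonce values are placeholders):

  import os
  from cryptography.hazmat.backends import default_backend
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  # Old pycrypto spelling, for comparison only:
  #   from Crypto.Cipher import AES
  #   cipher = AES.new(key, AES.MODE_CTR, counter=ctr)

  key = os.urandom(32)    # 256-bit key
  nonce = os.urandom(16)  # initial counter block for CTR mode

  encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                     backend=default_backend()).encryptor()
  ciphertext = encryptor.update(b"secret payload") + encryptor.finalize()

  decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                     backend=default_backend()).decryptor()
  assert decryptor.update(ciphertext) + decryptor.finalize() == b"secret payload"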

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1749574/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818649] Re: [pike] neutron-lbaasv2 with barbican error: LookupError: Container XXXXX could not be found

2023-04-25 Thread Grzegorz Grasza
Closing out bugs created before migration to StoryBoard. Please re-open
if you are of the opinion it is still current.

** Changed in: barbican
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818649

Title:
  [pike] neutron-lbaasv2 with barbican error:  LookupError: Container
  X could not be found

Status in Barbican:
  Won't Fix
Status in neutron:
  Invalid

Bug description:
  Is there any configuration guidance for neutron_lbaasv2? This problem has
  troubled me for a long time and the documentation is not very helpful. My
  lbaas config file has been changed many times, but it didn't work.

  my openstack version is pike.

  It is related to https://bugs.launchpad.net/barbican/+bug/1689846, but
  my environment is lbaasv2, not octavia.

  I tried to create a TLS listener through lbaasv2. The CLI returns a
  successful response and neutron-server.log looks fine, but there is an
  ERROR in lbaas-agent.log; the log is below.
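
  As a hedged check (not part of the original report): the container ref from
  the error can be fetched directly with python-barbicanclient, using the same
  credentials the lbaas agent is configured with, to confirm whether the
  lookup failure is an access/ownership problem. The auth values below are
  placeholders:

  from keystoneauth1 import identity, session
  from barbicanclient import client

  auth = identity.Password(auth_url="http://192.168.10.10:5000/v3",
                           username="neutron", password="secret",
                           project_name="service",
                           user_domain_id="default",
                           project_domain_id="default")
  barbican = client.Client(session=session.Session(auth=auth))

  # Raises LookupError/HTTP 404 if the container is not visible to these
  # credentials.
  container = barbican.containers.get(
      "http://192.168.10.10:9311/v1/containers/"
      "25670926-0f89-42b6-9fe6-05083d59736a")
  print(container)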

  

  2019-03-05 18:47:52.427 14045 INFO 
neutron_lbaas.common.cert_manager.barbican_cert_manager 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Loading certificate container 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
from Barbican.
  2019-03-05 18:47:52.428 14045 DEBUG barbicanclient.v1.containers 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Getting container - Container href: 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
get /usr/lib/python2.7/site-packages/barbicanclient/v1/containers.py:537
  2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Error getting 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a: 
LookupError: Container 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
could not be found.
  2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager Traceback (most recent 
call last):
  2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/common/cert_manager/barbican_cert_manager.py",
 line 174, in get_cert
  2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager 
container_ref=cert_ref
  2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager   File 
"/usr/lib/python2.7/site-packages/barbicanclient/v1/containers.py", line 543, 
in get
  2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager 
.format(container_ref))
  2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager LookupError: Container 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
could not be found.
  2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager
  2019-03-05 18:47:52.430 14045 DEBUG oslo_concurrency.lockutils 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Lock "haproxy-driver" released by 
"neutron_lbaas.drivers.haproxy.namespace_driver.deploy_instance" :: held 2.682s 
inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
  2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Create listener 
73a3aacc-3e81-4cee-aec9-1f0fa9cb61ca failed on device driver haproxy_ns: 
LookupError: Container 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
could not be found.
  2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
303, in create_listener
  2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager 
driver.listener.create(listener)
  2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 480, in create
  2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager 
self.driver.loadbalancer.refresh(listener.loadbalancer)
  2019-03-05 18:47:52.430 14045 ERROR 

[Yahoo-eng-team] [Bug 1995287] [NEW] [ML2/OVN] After upgrading from Xena to Yoga neutron-dhcp-agent is not working for Baremetals

2022-10-31 Thread Grzegorz Koper
Public bug reported:

After upgrading from Xena to Yoga dhcp stopped working for Baremetal
instances.

neutron-dhcp-agent is in a weird loop:

https://paste.opendev.org/show/bB3s1Zpd86i2R6ue1sPw/

neutron-server logs :

https://paste.opendev.org/show/bcMFvWkKByzjKhW7v5qO/

agents are reporting up and running:

[stack@server .ssh]$ openstack network agent list  | grep -i dhcp
| 02cecb18-841d-47f2-8b3e-c05134cf17b6 | DHCP agent   | server3 
| nova  | :-)   | UP| neutron-dhcp-agent
 |
| 2768ae2b-3869-4acd-9a83-d5446e62099c | DHCP agent   | server1 
| nova  | :-)   | UP| neutron-dhcp-agent
 |
| 8ed34d8b-30f4-45f3-8bfb-4689f616f5c6 | DHCP agent   | server2 
| nova  | :-)   | UP| neutron-dhcp-agent
 |

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  After upgrading from Xena to Yoga dhcp stopped working for Baremetal
  instances.
  
  neutron-dhcp-agent is in a weird loop:
  
  https://paste.opendev.org/show/bB3s1Zpd86i2R6ue1sPw/
  
  neutron-server logs :
  
  https://paste.opendev.org/show/bcMFvWkKByzjKhW7v5qO/
  
  agents are reporting up and running:
  
- (kayobe) [stack@kef1p-phyucd001 .ssh]$ openstack network agent list  | grep 
-i dhcp
- | 02cecb18-841d-47f2-8b3e-c05134cf17b6 | DHCP agent   | 
kef1p-phycon0003 | nova  | :-)   | UP| 
neutron-dhcp-agent |
- | 2768ae2b-3869-4acd-9a83-d5446e62099c | DHCP agent   | 
kef1p-phycon0001 | nova  | :-)   | UP| 
neutron-dhcp-agent |
- | 8ed34d8b-30f4-45f3-8bfb-4689f616f5c6 | DHCP agent   | 
kef1p-phycon0002 | nova  | :-)   | UP| 
neutron-dhcp-agent |
+ [stack@server .ssh]$ openstack network agent list  | grep -i dhcp
+ | 02cecb18-841d-47f2-8b3e-c05134cf17b6 | DHCP agent   | 
server3 | nova  | :-)   | UP| 
neutron-dhcp-agent |
+ | 2768ae2b-3869-4acd-9a83-d5446e62099c | DHCP agent   | 
server1 | nova  | :-)   | UP| 
neutron-dhcp-agent |
+ | 8ed34d8b-30f4-45f3-8bfb-4689f616f5c6 | DHCP agent   | 
server2 | nova  | :-)   | UP| 
neutron-dhcp-agent |

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1995287

Title:
  [ML2/OVN] After upgrading from Xena to Yoga neutron-dhcp-agent is not
  working for Baremetals

Status in neutron:
  New

Bug description:
  After upgrading from Xena to Yoga dhcp stopped working for Baremetal
  instances.

  neutron-dhcp-agent is in a weird loop:

  https://paste.opendev.org/show/bB3s1Zpd86i2R6ue1sPw/

  neutron-server logs :

  https://paste.opendev.org/show/bcMFvWkKByzjKhW7v5qO/

  agents are reporting up and running:

  [stack@server .ssh]$ openstack network agent list  | grep -i dhcp
  | 02cecb18-841d-47f2-8b3e-c05134cf17b6 | DHCP agent   | 
server3 | nova  | :-)   | UP| 
neutron-dhcp-agent |
  | 2768ae2b-3869-4acd-9a83-d5446e62099c | DHCP agent   | 
server1 | nova  | :-)   | UP| 
neutron-dhcp-agent |
  | 8ed34d8b-30f4-45f3-8bfb-4689f616f5c6 | DHCP agent   | 
server2 | nova  | :-)   | UP| 
neutron-dhcp-agent |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1995287/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1980058] Re: Openstack keystone LDAP integration | openstack user list --domain domain.com | Internal server error (HTTP 500)

2022-07-18 Thread Grzegorz Grasza
Glad to hear that!

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1980058

Title:
  Openstack keystone LDAP integration | openstack user list --domain
  domain.com | Internal server error (HTTP 500)

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Description of problem:
  I am trying to integrate an AD server with keystone and I am facing an
  'Internal server error'.
  domain configuration:
  [stack@hkg2director ~]$ cat workplace/keystone_domain_specific_ldap_backend.yaml
  # This is an example template on how to configure keystone domain specific LDAP
  # backends. This will configure a domain called tripleoldap with the attributes
  # specified.
  parameter_defaults:
    KeystoneLDAPDomainEnable: true
    KeystoneLDAPBackendConfigs:
      domain.com:
        url: ldap://172.25.161.211
        user: cn=Openstack,ou=Admins,dc=domain,dc=com
        password: password
        suffix: dc=domain,dc=com
        user_tree_dn: ou=APAC,dc=domain,dc=com
        user_filter: "(|(memberOf=cn=openstackadmin,ou=Groups,dc=domain,dc=com)(memberOf=cn=openstackeditor,ou=Groups,dc=domain,dc=com)(memberOf=cn=openstackviewer,ou=Groups,dc=domain,dc=com)"
        user_objectclass: person
        user_id_attribute: cn

        group_tree_dn: ou=Groups,dc=domain,dc=com
        group_objectclass: Groups
        group_id_attribute: cn
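
  As a hedged aside (not from the original report): the same bind and search
  can be exercised outside keystone with python-ldap, using the values pasted
  above, which helps separate LDAP-side problems from keystone ones. Note
  that the user_filter as pasted appears to be missing its final closing
  parenthesis; the filter below adds it:

  import ldap

  conn = ldap.initialize("ldap://172.25.161.211")
  conn.simple_bind_s("cn=Openstack,ou=Admins,dc=domain,dc=com", "password")

  user_filter = (
      "(|(memberOf=cn=openstackadmin,ou=Groups,dc=domain,dc=com)"
      "(memberOf=cn=openstackeditor,ou=Groups,dc=domain,dc=com)"
      "(memberOf=cn=openstackviewer,ou=Groups,dc=domain,dc=com))")

  results = conn.search_s("ou=APAC,dc=domain,dc=com", ldap.SCOPE_SUBTREE,
                          "(&(objectClass=person)%s)" % user_filter, ["cn"])
  print(len(results), "entries")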

  When I issue the command:
  $ openstack user list --domain domain.com
  Output: Internal server error (HTTP 500)

  Keystone_wsgi_error.log:
  [Tue Jun 28 06:46:49.112848 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] mod_wsgi (pid=45): Exception occurred processing WSGI 
script '/var/www/cgi-bin/keystone/keystone'.
  [Tue Jun 28 06:46:49.121797 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] Traceback (most recent call last):
  [Tue Jun 28 06:46:49.122202 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File "/usr/lib/python3.6/site-packages/flask/app.py", 
line 2464, in __call__
  [Tue Jun 28 06:46:49.122218 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] return self.wsgi_app(environ, start_response)
  [Tue Jun 28 06:46:49.122231 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File 
"/usr/lib/python3.6/site-packages/werkzeug/middleware/proxy_fix.py", line 187, 
in __call__
  [Tue Jun 28 06:46:49.122238 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] return self.app(environ, start_response)
  [Tue Jun 28 06:46:49.122248 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File "/usr/lib/python3.6/site-packages/webob/dec.py", 
line 129, in __call__
  [Tue Jun 28 06:46:49.122254 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] resp = self.call_func(req, *args, **kw)
  [Tue Jun 28 06:46:49.122264 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File "/usr/lib/python3.6/site-packages/webob/dec.py", 
line 193, in call_func
  [Tue Jun 28 06:46:49.122270 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] return self.func(req, *args, **kwargs)
  [Tue Jun 28 06:46:49.122284 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File 
"/usr/lib/python3.6/site-packages/oslo_middleware/base.py", line 124, in 
__call__
  [Tue Jun 28 06:46:49.122294 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] response = req.get_response(self.application)
  [Tue Jun 28 06:46:49.122304 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send
  [Tue Jun 28 06:46:49.122310 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] application, catch_exc_info=False)
  [Tue Jun 28 06:46:49.122320 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in 
call_application
  [Tue Jun 28 06:46:49.122326 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] app_iter = application(self.environ, start_response)
  [Tue Jun 28 06:46:49.122337 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File "/usr/lib/python3.6/site-packages/webob/dec.py", 
line 143, in __call__
  [Tue Jun 28 06:46:49.122344 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] return resp(environ, start_response)
  [Tue Jun 28 06:46:49.122354 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File "/usr/lib/python3.6/site-packages/webob/dec.py", 
line 129, in __call__
  [Tue Jun 28 06:46:49.122364 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080] resp = self.call_func(req, *args, **kw)
  [Tue Jun 28 06:46:49.122374 2022] [wsgi:error] [pid 45] [remote 
172.25.201.201:58080]   File "/usr/lib/python3.6/site-packages/webob/dec.py", 
line 193, in call_func
  [Tue Jun 28 06:46:49.122382 2022] [wsgi:error] [pid 45] 

[Yahoo-eng-team] [Bug 1965316] [NEW] No traceback on 400 "sequence item 0: expected str instance, bytes found" (TypeError)

2022-03-17 Thread Grzegorz Grasza
Public bug reported:

I have an error in keystone logs, with debug turned on:

2022-03-06 19:32:32.502 173 WARNING keystone.server.flask.application
[req-cb061f74-f48a-4c71-be47-6dda08f65c96
b731e56863e44d3e985aab70f01b054c 2722b2f6760745ab902d73de100b0ef4 -
default default] sequence item 0: expected str instance, bytes found

There is no traceback, which means I can't get any more information to
fix it.

I believe this is because unknown errors are logged as a warning instead
of an exception in _handle_keystone_exception function in the flask
application.
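
A hedged illustration of the difference (not the actual keystone handler; the
function name below is made up): LOG.exception records the active traceback,
while LOG.warning records only the message, which matches what is seen above.

import logging

LOG = logging.getLogger(__name__)

def _handle_unexpected_error(error):
    # LOG.warning(error)  # message only, no traceback
    LOG.exception("Unexpected error: %s", error)  # message plus traceback
    return 500

try:
    "".join([b"item"])  # reproduces "sequence item 0: expected str instance, bytes found"
except TypeError as exc:
    _handle_unexpected_error(exc)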

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1965316

Title:
  No traceback on 400 "sequence item 0: expected str instance, bytes
  found" (TypeError)

Status in OpenStack Identity (keystone):
  New

Bug description:
  I have an error in keystone logs, with debug turned on:

  2022-03-06 19:32:32.502 173 WARNING keystone.server.flask.application
  [req-cb061f74-f48a-4c71-be47-6dda08f65c96
  b731e56863e44d3e985aab70f01b054c 2722b2f6760745ab902d73de100b0ef4 -
  default default] sequence item 0: expected str instance, bytes found

  There is no traceback, which means I can't get any more information to
  fix it.

  I believe this is because unknown errors are logged as a warning
  instead of an exception in _handle_keystone_exception function in the
  flask application.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1965316/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1953627] [NEW] [ldappool] Downing one of the configured LDAP servers causes a persistent failure

2021-12-08 Thread Grzegorz Grasza
Public bug reported:

If a server disconnects after a pooled connection is created, it fails
with an ldap.TIMEOUT, which is unhandled.

The ReconnectLDAPObject used by ldappool only catches the
ldap.SERVER_DOWN exception in _apply_method_s which is applied to
synchronous methods, whereas ldap.TIMEOUT is properly caught only during
the initial connection.
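
A hedged sketch of the kind of handling described above (names are
illustrative, not the actual ldappool internals): retry a synchronous call on
ldap.TIMEOUT as well as ldap.SERVER_DOWN, so a dead pooled connection is
reconnected instead of the exception bubbling up unhandled.

import ldap
from ldap.ldapobject import ReconnectLDAPObject

def apply_with_retry(conn, uri, method_name, *args, retries=1, **kwargs):
    """Call conn.<method_name>(...), reconnecting once on transient errors."""
    for attempt in range(retries + 1):
        try:
            return getattr(conn, method_name)(*args, **kwargs)
        except (ldap.SERVER_DOWN, ldap.TIMEOUT):
            if attempt == retries:
                raise
            conn.reconnect(uri)  # ReconnectLDAPObject re-binds to the server

conn = ReconnectLDAPObject("ldap://127.0.0.1")
# apply_with_retry(conn, "ldap://127.0.0.1", "search_s",
#                  "dc=example,dc=com", ldap.SCOPE_SUBTREE, "(cn=*)")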


To test this I did the following:

1. Create a new interface with a new IP address for the ldap server

2. set this as the first server in the url list and set
pool_connection_timeout in the domain [ldap] configuration

3. Do a couple of:

  time openstack user list --domain=Users

to fill the connections in the pools in all WSGI processes

4. remove the created IP address from the interface

5. try an openstack user list again

Result:

the pooled connection consistently fails with:

gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: CRITICAL 
keystone [None req-7ad1997a-e23e-436b-a9ad-72f68a52abf8 demo admin] Unhandled 
error: ldap.TIMEOUT
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone Traceback (most recent call last): 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 
2091, in __call__
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone return self.wsgi_app(environ, start_response)
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File 
"/usr/local/lib/python3.9/dist-packages/werkzeug/middleware/proxy_fix.py", line 
187, in __call__
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone return self.app(environ, start_response)   

gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File "/usr/local/lib/python3.9/dist-packages/webob/dec.py", line 
129, in __call__ 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone resp = self.call_func(req, *args, **kw)
 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File "/usr/local/lib/python3.9/dist-packages/webob/dec.py", line 
193, in call_func   
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone return self.func(req, *args, **kwargs) 
 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File 
"/usr/local/lib/python3.9/dist-packages/oslo_middleware/base.py", line 124, in 
__call__
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone response = req.get_response(self.application)  

gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File "/usr/local/lib/python3.9/dist-packages/webob/request.py", line 
1313, in send 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone status, headers, app_iter = self.call_application( 
 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File "/usr/local/lib/python3.9/dist-packages/webob/request.py", line 
1278, in call_application
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone app_iter = application(self.environ, start_response)   
 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File "/usr/local/lib/python3.9/dist-packages/webob/dec.py", line 
143, in __call__
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone return resp(environ, start_response)   
 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File "/usr/local/lib/python3.9/dist-packages/webob/dec.py", line 
129, in __call__
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone resp = self.call_func(req, *args, **kw)
 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File "/usr/local/lib/python3.9/dist-packages/webob/dec.py", line 
193, in call_func 
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone return self.func(req, *args, **kwargs)  
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone   File 
"/usr/local/lib/python3.9/dist-packages/oslo_middleware/base.py", line 124, in 
__call__
gru 08 14:21:01 ggrasza-ubuntu2 devstack@keystone.service[1270797]: ERROR 
keystone response = 

[Yahoo-eng-team] [Bug 1953622] [NEW] LDAP Failover behavior is unexpected and random, depending on which server on the configured list fails

2021-12-08 Thread Grzegorz Grasza
Public bug reported:

When the user specifies a list of LDAP servers to connect to, both ldappool
and python-ldap try them in order. Depending on which server fails, this
causes a waiting period of the configured timeout. If the first servers on
the list are down, every request is delayed.

This behavior would be expected if LDAP were run in HA with keystone
writing to it, but since the LDAP backend is read-only, this shouldn't be
the default.
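
A hedged sketch of one possible mitigation (not current keystone behavior):
shuffle the configured URLs before each connection attempt, so a single dead
server at the head of the list does not delay every request. This assumes the
comma-separated form of keystone's [ldap] url option.

import random

def candidate_urls(url_option):
    """Return the configured LDAP URLs in a randomized order."""
    urls = [u.strip() for u in url_option.split(",")]
    random.shuffle(urls)
    return urls

print(candidate_urls("ldap://ldap1.example.com, ldap://ldap2.example.com"))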

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: ldap

** Tags added: ldap

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1953622

Title:
  LDAP Failover behavior is unexpected and random, depending on which
  server on the configured list fails

Status in OpenStack Identity (keystone):
  New

Bug description:
  When the user specifies a list of LDAP servers to connect to, both
  ldappool and python-ldap try them in order. Depending on which server
  fails, this causes a waiting period of the configured timeout. If the
  first servers on the list are down, every request is delayed.

  This behavior would be expected if LDAP were run in HA with keystone
  writing to it, but since the LDAP backend is read-only, this shouldn't
  be the default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1953622/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1952458] [NEW] create_id_mapping method caches bytes with LDAP backend

2021-11-26 Thread Grzegorz Grasza
Public bug reported:

When connecting to some LDAP servers, the LDAP library returns bytes
data instead of strings, resulting in unexpected errors, ex:

a call to

/v3/projects/x/groups/y/roles/z

results in keystone.exception.GroupNotFound: Could not find group: b'Q'.

After adding more debug logs it was determined that get_id_mapping
returns the LDAP group name as binary type. get_id_mapping is memoized
(@MEMOIZE_ID_MAPPING), the cache is filled not only during the
"memoization" but also inside the create_id_mapping method:

def create_id_mapping(self, local_entity, public_id=None):
    public_id = self.driver.create_id_mapping(local_entity, public_id)
    if MEMOIZE_ID_MAPPING.should_cache(public_id):
        self._get_public_id.set(public_id, self,
                                local_entity['domain_id'],
                                local_entity['local_id'],
                                local_entity['entity_type'])
        self.get_id_mapping.set(local_entity, self, public_id)
    return public_id

What is cached is the input dictionary, which is passed into the function, 
instead of what the SQL backend returns.
The sql backend transparently converts bytes when inserting data into the 
database, and always returns strings when the data is read.
The intersection of the above causes the unexpected behavior with transient 
errors.

The local_id is returned as bytes from the LDAP backend, but it's
difficult to trace exactly where, without access to the environment with
this specific LDAP software.
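
A hedged sketch of the general shape of a fix (not the actual keystone patch):
normalize any bytes coming back from the LDAP driver to text before caching it,
mirroring what the SQL backend does implicitly on reads.

def _to_text(value, encoding="utf-8"):
    """Return value as str, decoding bytes the LDAP library may hand back."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

local_entity = {"domain_id": "default", "local_id": b"Q", "entity_type": "group"}
local_entity["local_id"] = _to_text(local_entity["local_id"])
assert local_entity["local_id"] == "Q"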

** Affects: keystone
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1952458

Title:
  create_id_mapping method caches bytes with LDAP backend

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  When connecting to some LDAP servers, the LDAP library returns bytes
  data instead of strings, resulting in unexpected errors, ex:

  a call to

  /v3/projects/x/groups/y/roles/z

  results in keystone.exception.GroupNotFound: Could not find group:
  b'Q'.

  After adding more debug logs it was determined that get_id_mapping
  returns the LDAP group name as binary type. get_id_mapping is memoized
  (@MEMOIZE_ID_MAPPING), the cache is filled not only during the
  "memoization" but also inside the create_id_mapping method:

  def create_id_mapping(self, local_entity, public_id=None):
      public_id = self.driver.create_id_mapping(local_entity, public_id)
      if MEMOIZE_ID_MAPPING.should_cache(public_id):
          self._get_public_id.set(public_id, self,
                                  local_entity['domain_id'],
                                  local_entity['local_id'],
                                  local_entity['entity_type'])
          self.get_id_mapping.set(local_entity, self, public_id)
      return public_id

  What is cached is the input dictionary, which is passed into the function, 
instead of what the SQL backend returns.
  The sql backend transparently converts bytes when inserting data into the 
database, and always returns strings when the data is read.
  The intersection of the above causes the unexpected behavior with transient 
errors.

  The local_id is returned as bytes from the LDAP backend, but it's
  difficult to trace exactly where, without access to the environment
  with this specific LDAP software.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1952458/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1928650] Re: RBAC tests in keystone-tempest-plugin broke queens and rocky keystone-dsvm-functional

2021-09-29 Thread Grzegorz Grasza
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1928650

Title:
  RBAC tests in keystone-tempest-plugin broke queens and rocky keystone-
  dsvm-functional

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The keystone-dsvm-functional CI tests fail with syntax errors like:

  
  Failed to import test module: keystone_tempest_plugin.tests.rbac.v3.test_user
  Traceback (most recent call last):
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/keystone_tempest_plugin/tests/rbac/v3/test_user.py",
 line 25
  metaclass=abc.ABCMeta):
   ^
  SyntaxError: invalid syntax

  
  example runs:

  queens:

  https://zuul.opendev.org/t/openstack/build/7829207c56c349ad81ddedb4c042d6f3

  
  rocky:

  https://zuul.opendev.org/t/openstack/build/d725fdae2b4247eb84d3b7a7af7fb067

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1928650/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1929066] [NEW] String length exceeded local_id mapping to LDAP

2021-05-20 Thread Grzegorz Grasza
Public bug reported:

LDAP Group ID may exceed the current table limit:

String length exceeded. The length of string '***' exceeds the limit of
column local_id(CHAR(64)). (HTTP 400) (Request-ID: req-bf68d05f-dc7b-
4f4b-bbb0-d2a11728de86)

From an upstream bug [1], the following workaround was suggested:

The workaround for this issue is to not use objectGUID as the user or
group ID. However, that workaround might not be applicable in all
situations. For example, the default value for user_id_attribute is
'cn', but if that value spans more than 64 characters, keystone can't
work with it.

But for security reasons, the customer can't change the mapped field.

I believe the limit can be safely changed to 255 without impacting other
openstack projects, keystone backends or subsystems.

[1] https://bugs.launchpad.net/keystone/+bug/1889936/comments/1
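
A hedged sketch of the proposed schema change (illustrative only, not an
actual keystone migration; column nullability is assumed):

import sqlalchemy as sa
from alembic import op

def upgrade():
    # Widen id_mapping.local_id from 64 to 255 characters.
    op.alter_column('id_mapping', 'local_id',
                    type_=sa.String(255),
                    existing_type=sa.String(64),
                    existing_nullable=False)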

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1929066

Title:
  String length exceeded local_id mapping to LDAP

Status in OpenStack Identity (keystone):
  New

Bug description:
  LDAP Group ID may exceed the current table limit:

  String length exceeded. The length of string '***' exceeds the limit
  of column local_id(CHAR(64)). (HTTP 400) (Request-ID: req-bf68d05f-
  dc7b-4f4b-bbb0-d2a11728de86)

  From an upstream bug [1], the following workaround was suggested:

  The workaround for this issue is to not use objectGUID as the user or
  group ID. However, that workaround might not be applicable in all
  situations. For example, the default value for user_id_attribute is
  'cn', but if that value spans more than 64 characters, keystone can't
  work with it.

  But for security reasons, the customer can't change the mapped field.

  I believe the limit can be safely changed to 255 without impacting
  other openstack projects, keystone backends or subsystems.

  [1] https://bugs.launchpad.net/keystone/+bug/1889936/comments/1

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1929066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1928650] [NEW] RBAC tests in keystone-tempest-plugin broke queens and rocky keystone-dsvm-functional

2021-05-17 Thread Grzegorz Grasza
Public bug reported:

The keystone-dsvm-functional CI tests fail with syntax errors like:


Failed to import test module: keystone_tempest_plugin.tests.rbac.v3.test_user
Traceback (most recent call last):
  File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/keystone_tempest_plugin/tests/rbac/v3/test_user.py",
 line 25
metaclass=abc.ABCMeta):
 ^
SyntaxError: invalid syntax


example runs:

queens:

https://zuul.opendev.org/t/openstack/build/7829207c56c349ad81ddedb4c042d6f3


rocky:

https://zuul.opendev.org/t/openstack/build/d725fdae2b4247eb84d3b7a7af7fb067
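
A hedged illustration of the incompatibility behind the traceback (class name
is made up): the "metaclass=" class keyword is Python-3-only syntax, while the
six decorator parses on the Python 2.7 interpreter used by those branches.

import abc
import six

@six.add_metaclass(abc.ABCMeta)  # importable on both Python 2.7 and 3
class BaseRbacTest(object):
    pass

# Python-3-only spelling, which is a SyntaxError under Python 2.7:
#
#     class BaseRbacTest(object, metaclass=abc.ABCMeta):
#         pass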

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1928650

Title:
  RBAC tests in keystone-tempest-plugin broke queens and rocky keystone-
  dsvm-functional

Status in OpenStack Identity (keystone):
  New

Bug description:
  The keystone-dsvm-functional CI tests fail with syntax errors like:

  
  Failed to import test module: keystone_tempest_plugin.tests.rbac.v3.test_user
  Traceback (most recent call last):
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/keystone_tempest_plugin/tests/rbac/v3/test_user.py",
 line 25
  metaclass=abc.ABCMeta):
   ^
  SyntaxError: invalid syntax

  
  example runs:

  queens:

  https://zuul.opendev.org/t/openstack/build/7829207c56c349ad81ddedb4c042d6f3

  
  rocky:

  https://zuul.opendev.org/t/openstack/build/d725fdae2b4247eb84d3b7a7af7fb067

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1928650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595475] [NEW] In multinode setup - glance-cache-manage and glance-cache-prefetcher fails to find an image, since it tries the first available glance_api server and fails

2016-06-23 Thread Grzegorz
Public bug reported:

In a multinode setup, glance-cache-manage and glance-cache-prefetcher
fail to find an image, since they try the first available glance_api
server and fail.

I know this problem is known, since there is an open bug about how cinder
accesses glance:
https://bugs.launchpad.net/cinder/+bug/1571211

But even if we move the glance backend to swift or something else, we still
can't use glance-cache properly, because glance lacks replication between
image cache directories in a multinode setup.

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: glance glance-cache

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1595475

Title:
  In multinode setup - glance-cache-manage and glance-cache-prefetcher
  fails to find an image, since it tries the first available glance_api
  server and fails

Status in Glance:
  New

Bug description:
  In a multinode setup, glance-cache-manage and glance-cache-prefetcher
  fail to find an image, since they try the first available glance_api
  server and fail.

  I know this problem is known, since there is an open bug about how cinder
  accesses glance:
  https://bugs.launchpad.net/cinder/+bug/1571211

  But even if we move the glance backend to swift or something else, we
  still can't use glance-cache properly, because glance lacks replication
  between image cache directories in a multinode setup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1595475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517929] [NEW] Eventlet removal

2015-11-19 Thread Grzegorz Grasza
Public bug reported:

Eventlet has been deprecated since the Kilo release and is scheduled for
removal in Mitaka:

https://review.openstack.org/#/c/157495/

There were discussions about this at the summit:

https://etherpad.openstack.org/p/keystone-mitaka-summit-deprecations

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1517929

Title:
  Eventlet removal

Status in OpenStack Identity (keystone):
  New

Bug description:
  Eventlet has been deprecated since the Kilo release and is scheduled
  for removal in Mitaka:

  https://review.openstack.org/#/c/157495/

  There were discussions about this at the summit:

  https://etherpad.openstack.org/p/keystone-mitaka-summit-deprecations

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1517929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509944] [NEW] Rolling upgrades: online schema migration

2015-10-26 Thread Grzegorz Grasza
Public bug reported:

Future incompatible changes in sqlalchemy migrations, like removing or
renaming columns and tables, can break rolling upgrades (upgrades during
which multiple Keystone instances run at different versions).

To address this, we can ban schema changes which cause
incompatibilities, specifically drops and alters, like in Nova:

https://github.com/openstack/nova/blob/stable/liberty/nova/tests/unit/db/test_migrations.py#L224-L225
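
A hedged, simplified stand-in for the nova test linked above (the directory
path and the list of banned calls are assumptions, not keystone's actual
layout): fail if any migration script uses an operation that breaks rolling
upgrades.

import os
import unittest

BANNED_CALLS = ('drop_column', 'drop_table', 'alter_column', 'rename_table')
MIGRATIONS_DIR = 'keystone/common/sql/migrate_repo/versions'  # assumed path

class TestBannedSchemaOperations(unittest.TestCase):
    def test_no_incompatible_operations(self):
        for name in sorted(os.listdir(MIGRATIONS_DIR)):
            if not name.endswith('.py'):
                continue
            with open(os.path.join(MIGRATIONS_DIR, name)) as f:
                source = f.read()
            for banned in BANNED_CALLS:
                self.assertNotIn(banned, source,
                                 '%s uses banned operation %s' % (name, banned))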

** Affects: keystone
 Importance: Undecided
 Assignee: Grzegorz Grasza (xek)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Grzegorz Grasza (xek)

** Changed in: keystone
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1509944

Title:
  Rolling upgrades: online schema migration

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Future incompatible changes in sqlalchemy migrations, like removing or
  renaming columns and tables, can break rolling upgrades (upgrades during
  which multiple Keystone instances run at different versions).

  To address this, we can ban schema changes which cause
  incompatibilities, specifically drops and alters, like in Nova:

  
https://github.com/openstack/nova/blob/stable/liberty/nova/tests/unit/db/test_migrations.py#L224-L225

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1509944/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338403] Re: circular reference detected with exception

2015-03-11 Thread Grzegorz Grasza
*** This bug is a duplicate of bug 1317804 ***
https://bugs.launchpad.net/bugs/1317804

** This bug has been marked a duplicate of bug 1317804
   InstanceActionEvent traceback parameter not serializable

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338403

Title:
  circular reference detected with exception

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  2014-07-07 02:10:08.727 10283 ERROR oslo.messaging.rpc.dispatcher 
[req-54c68afe-91a8-4a99-86e8-785c0abf7688 ] Exception during message handling: 
Circular reference detected
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 133, in _dispatch_and_reply
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 176, in _dispatch
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 122, in _do_dispatch
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py, 
line 88, in wrapped
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py, 
line 71, in wrapped
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 336, in decorated_function
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/utils.py,
 line 437, in __exit__
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
exc_tb=exc_tb)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/objects/base.py, 
line 142, in wrapper
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher args, 
kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/conductor/rpcapi.py,
 line 355, in object_class_action
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
objver=objver, args=args, kwargs=kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/client.py,
 line 150, in call
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/transport.py,
 line 90, in _send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
 line 412, in send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
 line 385, in _send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher msg = 
rpc_common.serialize_msg(msg)
  2014-07-07 02:10:08.727 10283 

[Yahoo-eng-team] [Bug 1338403] Re: circular reference detected with exception

2015-03-10 Thread Grzegorz Grasza
This is a duplicate of #1317804

** Changed in: nova
  Assignee: Grzegorz Grasza (xek) => (unassigned)

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338403

Title:
  circular reference detected with exception

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  2014-07-07 02:10:08.727 10283 ERROR oslo.messaging.rpc.dispatcher 
[req-54c68afe-91a8-4a99-86e8-785c0abf7688 ] Exception during message handling: 
Circular reference detected
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 133, in _dispatch_and_reply
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 176, in _dispatch
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 122, in _do_dispatch
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py, 
line 88, in wrapped
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py, 
line 71, in wrapped
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 336, in decorated_function
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/utils.py,
 line 437, in __exit__
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
exc_tb=exc_tb)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/objects/base.py, 
line 142, in wrapper
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher args, 
kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/conductor/rpcapi.py,
 line 355, in object_class_action
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
objver=objver, args=args, kwargs=kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/client.py,
 line 150, in call
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/transport.py,
 line 90, in _send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
 line 412, in send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
 line 385, in _send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher msg = 
rpc_common.serialize_msg(msg)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher

[Yahoo-eng-team] [Bug 1402574] [NEW] No fault-tolerance in nova-scheduler

2014-12-15 Thread Grzegorz Grasza
Public bug reported:

In the case a nova-scheduler service dies during processing (see below
how to reproduce it), the message is not rescheduled to another one in an
HA setup.

Oslo messaging raises a timeout in the conductor:

2014-12-11 07:49:53.565 ERROR nova.scheduler.driver 
[req-f866a584-ba67-42a8-aec7-5500b631708e admin admin] Exception during 
scheduler.run_instance
 Traceback (most recent call last):
   File /opt/stack/nova/nova/conductor/manager.py, line 640, in 
build_instances
 request_spec, filter_properties)
   File /opt/stack/nova/nova/scheduler/client/__init__.py, line 49, in 
select_destinations
 context, request_spec, filter_properties)
   File /opt/stack/nova/nova/scheduler/client/__init__.py, line 35, in 
__run_method
 return getattr(self.instance, __name)(*args, **kwargs)
   File /opt/stack/nova/nova/scheduler/client/query.py, line 34, in 
select_destinations
 context, request_spec, filter_properties)
   File /opt/stack/nova/nova/scheduler/rpcapi.py, line 118, in 
select_destinations
 request_spec=request_spec, filter_properties=filter_properties)
   File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py, 
line 152, in call
 retry=self.retry)
   File /usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, 
line 90, in _send
 timeout=timeout, retry=retry)
   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 436, in send
 retry=retry)
   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 425, in _send
 result = self._waiter.wait(msg_id, timeout)
   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 315, in wait
 reply, ending = self._poll_connection(msg_id, timer)
   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 264, in _poll_connection
 % msg_id)
 MessagingTimeout: Timed out waiting for a reply to message ID 
aec640c6da0f4cf383b5100ba2441331

The proper behavior would be to at least try once again, even in a
single machine setup - the message will be picked up by another server
or the same one when it restarts.

The Oslo messaging architecture doesn't support this being handled by
the AMQP server, so message rescheduling has to be implemented in Nova
(by the application logic).
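
A hedged sketch of the application-level retry described above (illustrative,
not nova code): re-send the scheduler RPC once when the reply times out, so
another scheduler instance, or the restarted one, can pick the message up.

import oslo_messaging as messaging

def select_destinations_with_retry(scheduler_client, context, request_spec,
                                   filter_properties, retries=1):
    """Retry the scheduler call when no reply arrives before the timeout."""
    for attempt in range(retries + 1):
        try:
            return scheduler_client.select_destinations(
                context, request_spec, filter_properties)
        except messaging.MessagingTimeout:
            if attempt == retries:
                raise
            # Re-sending puts the request back on the queue for any scheduler.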

To reproduce the error, I added ipdb.set_trace() in
nova/scheduler/filter_scheduler.py:287 before returning selected_hosts
in the _schedule method.

** Affects: nova
 Importance: Undecided
 Assignee: Grzegorz Grasza (xek)
 Status: In Progress


** Tags: nova-conductor nova-scheduler

** Description changed:

  In the case a nova-scheduler server dies during processing (see below
- how I reproduce it), the message is not rescheduled to another one in a
+ how to reproduce it), the message is not rescheduled to another one in a
  HA setup.
  
  Oslo messaging raises a timeout in the conductor:
  
  2014-12-11 07:49:53.565 ERROR nova.scheduler.driver 
[req-f866a584-ba67-42a8-aec7-5500b631708e admin admin] Exception during 
scheduler.run_instance
-  Traceback (most recent call last):
-File /opt/stack/nova/nova/conductor/manager.py, line 640, in 
build_instances
-  request_spec, filter_properties)
-File /opt/stack/nova/nova/scheduler/client/__init__.py, line 49, in 
select_destinations
-  context, request_spec, filter_properties)
-File /opt/stack/nova/nova/scheduler/client/__init__.py, line 35, in 
__run_method
-  return getattr(self.instance, __name)(*args, **kwargs) 
-File /opt/stack/nova/nova/scheduler/client/query.py, line 34, in 
select_destinations
-  context, request_spec, filter_properties)
-File /opt/stack/nova/nova/scheduler/rpcapi.py, line 118, in 
select_destinations
-  request_spec=request_spec, filter_properties=filter_properties)
-File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py, line 
152, in call
-  retry=self.retry)
-File /usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, 
line 90, in _send
-  timeout=timeout, retry=retry)
-File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 436, in send
-  retry=retry)
-File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 425, in _send
-  result = self._waiter.wait(msg_id, timeout)
-File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 315, in wait
-  reply, ending = self._poll_connection(msg_id, timer)
-File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 264, in _poll_connection
-  % msg_id)
-  MessagingTimeout: Timed out waiting for a reply to message ID 
aec640c6da0f4cf383b5100ba2441331
+  Traceback (most recent call last):
+    File /opt/stack/nova/nova/conductor/manager.py, line 640, in 
build_instances
+  request_spec