[Yahoo-eng-team] [Bug 1742385] Re: test_create_show_delete_security_group_rule_names failure on networking-midonet gate

2018-01-10 Thread YAMAMOTO Takashi
** Description changed:

  test_create_show_delete_security_group_rule_names is failing on
  networking-midonet gate jobs.
+ 
+ it seems like a neutron-lib version mismatch.
+ 1.11.0 is used by neutron.
+ on the other hand, 1.12.0 is used by tempest plugins installed via TEMPEST_PLUGINS,
+ which doesn't honor upper-constraints.
+ i guess tempest plugins should not use the list of protocols in neutron-lib because tempest is branchless and supposed to work against old servers (with older neutron-lib) and other implementations (like midonet) as well.
  
  note: "ipip" is added by neutron-lib 1.12.0
  (I18e5e42b687e12b64f5a9c523a912c8dd1afa9d2)
  
  eg. http://logs.openstack.org/91/531091/3/check/networking-midonet-
  tempest-multinode-ml2/1503639/logs/testr_results.html.gz
  
  Traceback (most recent call last):
    File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/api/test_security_groups.py",
 line 80, in test_create_show_delete_security_group_rule_names
  ethertype=self.ethertype)
    File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/api/base_security_groups.py",
 line 88, in _test_create_show_delete_security_group_rule
  self._create_security_group_rule(**kwargs)['security_group_rule'])
    File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/api/base_security_groups.py",
 line 58, in _create_security_group_rule
  rule_create_body = self.client.create_security_group_rule(**kwargs)
    File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/services/network/json/network_client.py",
 line 865, in create_security_group_rule
  resp, body = self.post(uri, body)
    File "tempest/lib/common/rest_client.py", line 279, in post
  return self.request('POST', url, extra_headers, headers, body, chunked)
    File "tempest/lib/common/rest_client.py", line 668, in request
  self._error_checker(resp, resp_body)
    File "tempest/lib/common/rest_client.py", line 779, in _error_checker
  raise exceptions.BadRequest(resp_body, resp=resp)
  tempest.lib.exceptions.BadRequest: Bad request
  Details: {u'type': u'SecurityGroupRuleInvalidProtocol', u'message': 
u"Security group rule protocol ipip not supported. Only protocol values [None, 
'ah', 'pgm', 'tcp', 'ipv6-encap', 'dccp', 'igmp', 'icmp', 'esp', 'ipv6-icmp', 
'vrrp', 'gre', 'sctp', 'rsvp', 'ipv6-route', 'udp', 'ipv6-opts', 'ipv6-nonxt', 
'udplite', 'egp', 'icmpv6', 'ipv6-frag', 'ospf'] and integer representations [0 
to 255] are supported.", u'detail': u''}

** Also affects: neutron
   Importance: Undecided
   Status: New

** Description changed:

  test_create_show_delete_security_group_rule_names is failing on
  networking-midonet gate jobs.
  
  it seems like a neutron-lib version mismatch.
  1.11.0 is used by neutron.
- on the other hand, 1.12.0 is used by tempest plugins installed via TEMPEST_PLUGINS,
- which doesn't honor upper-constraints.
- i guess tempest plugins should not use the list of protocols in neutron-lib because tempest is branchless and supposed to work against old servers (with older neutron-lib) and other implementations (like midonet) as well.
+ on the other hand, 1.12.0 is used by tempest plugins installed via TEMPEST_PLUGINS, which doesn't honor upper-constraints.
+ i guess tempest plugins should not use the list of protocols from neutron-lib because tempest is branchless and supposed to work against old servers (with older neutron-lib) and other implementations (like midonet) as well.
  
  note: "ipip" is added by neutron-lib 1.12.0
  (I18e5e42b687e12b64f5a9c523a912c8dd1afa9d2)
  
  eg. http://logs.openstack.org/91/531091/3/check/networking-midonet-
  tempest-multinode-ml2/1503639/logs/testr_results.html.gz
  
  Traceback (most recent call last):
    File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/api/test_security_groups.py",
 line 80, in test_create_show_delete_security_group_rule_names
  ethertype=self.ethertype)
    File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/api/base_security_groups.py",
 line 88, in _test_create_show_delete_security_group_rule
  self._create_security_group_rule(**kwargs)['security_group_rule'])
    File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/api/base_security_groups.py",
 line 58, in _create_security_group_rule
  rule_create_body = self.client.create_security_group_rule(**kwargs)
    File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/services/network/json/network_client.py",
 line 865, in create_security_group_rule
  resp, body = self.post(uri, body)
    File "tempest/lib/common/rest_client.py", line 279, in post
  return self.request('POST', url, extra_headers, headers, body, chunked)
    File "tempest/lib/common/rest_client.py", line 668, in request
  self._error_checker(resp, resp_body)
    File "tempest/lib/common/rest_client.py", line 779, in _error_checker
  raise exceptions.BadRequest(resp_body, resp=resp)
  tempest.lib.exceptions.BadRequest: Bad request
  Details: {u'type': u'SecurityGroupRuleInvalidProtocol', u'message': u"Security group rule protocol ipip not supported. Only protocol values [None, 'ah', 'pgm', 'tcp', 'ipv6-encap', 'dccp', 'igmp', 'icmp', 'esp', 'ipv6-icmp', 'vrrp', 'gre', 'sctp', 'rsvp', 'ipv6-route', 'udp', 'ipv6-opts', 'ipv6-nonxt', 'udplite', 'egp', 'icmpv6', 'ipv6-frag', 'ospf'] and integer representations [0 to 255] are supported.", u'detail': u''}

[Yahoo-eng-team] [Bug 1742401] [NEW] Fullstack tests neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork fails often

2018-01-10 Thread Slawek Kaplonski
Public bug reported:

Fullstack tests from group
neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork
are often failing in gate with error like:

ft1.1: 
neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork.test_securitygroup(ovs-hybrid)_StringException:
 Traceback (most recent call last):
  File "neutron/tests/base.py", line 132, in func
return f(self, *args, **kwargs)
  File "neutron/tests/fullstack/test_securitygroup.py", line 193, in 
test_securitygroup
net_helpers.assert_no_ping(vms[0].namespace, vms[1].ip)
  File "neutron/tests/common/net_helpers.py", line 155, in assert_no_ping
{'ns': src_namespace, 'destination': dst_ip})
  File "neutron/tests/tools.py", line 144, in fail
raise unittest2.TestCase.failureException(msg)
AssertionError: destination ip 20.0.0.9 is replying to ping from namespace 
test-dbbb4045-363f-44cb-825b-17090f28df11, but it shouldn't

Example gate logs: http://logs.openstack.org/43/529143/3/check/neutron-
fullstack/d031a6b/logs/testr_results.html.gz
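For context, the failing helper behaves roughly like the sketch below (a simplified stand-in with an injectable command runner; neutron's actual net_helpers code differs): it pings the destination from inside the source network namespace and fails the test if the destination replies.

```python
import subprocess

def assert_no_ping(src_namespace, dst_ip, runner=subprocess.run):
    """Simplified sketch of net_helpers.assert_no_ping: ping from inside
    the source network namespace and raise if the destination replies.

    ``runner`` defaults to subprocess.run; it is injectable so the logic
    can be exercised without root or real namespaces.
    """
    cmd = ['ip', 'netns', 'exec', src_namespace,
           'ping', '-c', '1', '-W', '1', dst_ip]
    result = runner(cmd, capture_output=True)
    if result.returncode == 0:
        raise AssertionError(
            "destination ip %s is replying to ping from namespace %s, "
            "but it shouldn't" % (dst_ip, src_namespace))
```

The gate failure above is exactly this AssertionError: traffic that the security group rules should have dropped got through.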

** Affects: neutron
 Importance: High
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: fullstack gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1742401

Title:
  Fullstack tests
  neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork
  fails often

Status in neutron:
  Confirmed

Bug description:
  Fullstack tests from group
  neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork
  are often failing in gate with error like:

  ft1.1: 
neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork.test_securitygroup(ovs-hybrid)_StringException:
 Traceback (most recent call last):
File "neutron/tests/base.py", line 132, in func
  return f(self, *args, **kwargs)
File "neutron/tests/fullstack/test_securitygroup.py", line 193, in 
test_securitygroup
  net_helpers.assert_no_ping(vms[0].namespace, vms[1].ip)
File "neutron/tests/common/net_helpers.py", line 155, in assert_no_ping
  {'ns': src_namespace, 'destination': dst_ip})
File "neutron/tests/tools.py", line 144, in fail
  raise unittest2.TestCase.failureException(msg)
  AssertionError: destination ip 20.0.0.9 is replying to ping from namespace 
test-dbbb4045-363f-44cb-825b-17090f28df11, but it shouldn't

  Example gate logs: http://logs.openstack.org/43/529143/3/check
  /neutron-fullstack/d031a6b/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1742401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1742412] [NEW] Unicode handling error in transfer credentials

2018-01-10 Thread Radomir Dopieralski
Public bug reported:

Unicode decode error when downloading the volume transfer authorization 
information
Steps to Reproduce:
1. Switch the language to Japanese in dashboard
2. Create a volume transfer
3. Download the authorization information

Actual results:

The following error will be output in horizon.log and the authorization
information cannot be downloaded:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 0:
ordinal not in range(128)
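A minimal reproduction of this failure mode, outside Horizon (the label text is a hypothetical example): UTF-8 encoded Japanese text cannot be decoded with the default 'ascii' codec, which is what Python 2's implicit str/unicode mixing attempts.

```python
# -*- coding: utf-8 -*-
# Sketch of the failure mode, not Horizon's actual code.

label = u'認証'               # hypothetical Japanese label; UTF-8 form starts with 0xe8
raw = label.encode('utf-8')   # what ends up in the download body

try:
    raw.decode('ascii')       # the implicit ascii decode Python 2 performs
    decode_failed = False
except UnicodeDecodeError:
    decode_failed = True

assert decode_failed
assert raw[0] == 0xe8                 # matches "byte 0xe8 in position 0"
assert raw.decode('utf-8') == label   # the fix: decode with the real encoding
```

The fix in the dashboard is to handle the body as unicode throughout (or decode explicitly as UTF-8) instead of letting the ascii default apply.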

** Affects: horizon
 Importance: Undecided
 Assignee: Radomir Dopieralski (deshipu)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Radomir Dopieralski (deshipu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1742412

Title:
  Unicode handling error in transfer credentials

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Unicode decode error when downloading the volume transfer authorization 
information
  Steps to Reproduce:
  1. Switch the language to Japanese in dashboard
  2. Create a volume transfer
  3. Download the authorization information

  Actual results:

  The following error will be output in horizon.log and the
  authorization information cannot be downloaded:

  UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position
  0: ordinal not in range(128)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1742412/+subscriptions



[Yahoo-eng-team] [Bug 1723928] Re: In case of volume_use_multipath=True, Nova unable to fetch CONF.libvirt.volume_use_multipath value from nova.conf

2018-01-10 Thread Lee Yarwood
The config-ref makes it clear that this needs to be configured on the
individual compute hosts:

https://docs.openstack.org/nova/pike/configuration/config.html#libvirt
https://docs.openstack.org/nova/pike/configuration/config.html#libvirt.volume_use_multipath

Further as this is devstack you should be using local.conf to set this
during the initial stack.sh run:

https://docs.openstack.org/devstack/latest/configuration.html#id3
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/nova#n51

Marking the bug as invalid given c#29

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723928

Title:
  In case of volume_use_multipath=True, Nova unable to fetch
  CONF.libvirt.volume_use_multipath value from nova.conf

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  New

Bug description:
  Issue :-
  --
  When we place 'volume_use_multipath=True' in nova.conf and attach a volume
to an instance, the 'connector' dictionary passed to cinder's
initialize_connection() has multipath=False (i.e. connector['multipath']=False).

  Expected :-
  --
  This should be connector['multipath']=True since I have placed
'volume_use_multipath=True'.

  connector
  {'wwpns': [u'1000d4c9ef76a1d1', u'1000d4c9ef76a1d5'], 'wwnns': 
[u'2000d4c9ef76a1d1', u'2000d4c9ef76a1d5'], 'ip': '10.50.0.155', 'initiator': 
u'iqn.1993-08.org.debian:01:db6bf10a0db', 'platform': 'x86_64', 'host': 
'cld6b11', 'do_local_attach': False, 'os_type': 'linux2', 'multipath': False}

  
  Steps to reproduce :-
  
  1) Place volume_use_multipath=True in nova.conf libvirt section
  [libvirt]
  live_migration_uri = qemu+ssh://stack@%s/system
  cpu_mode = none
  virt_type = kvm
  volume_use_multipath = True

  2) Create an LVM volume
  3) Create an instance and try to attach the volume.

  Note :- 
  -
  This multipath functionality worked fine in Ocata, but from Pike through the
current (Queens) release it is not working as expected.

  connector dictionary in ocata release :-
  connector
  {u'wwpns': [u'100038eaa73005a1', u'100038eaa73005a5'], u'wwnns': 
[u'200038eaa73005a1', u'200038eaa73005a5'], u'ip': u'10.50.128.110', 
u'initiator': u'iqn.1993-08.org.debian:01:d7f1c5d25e0', u'platform': u'x86_64', 
u'host': u'cld6b10', u'do_local_attach': False, u'os_type': u'linux2', 
u'multipath': True}
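The expected behavior can be expressed as a tiny sketch (hypothetical helper, not actual nova or os-brick code): whatever builds the connector properties should propagate the configured [libvirt]volume_use_multipath flag into the dictionary handed to Cinder.

```python
def build_connector(conf, base_props):
    """Hypothetical sketch: the connector dict passed to Cinder's
    initialize_connection() should carry the configured
    volume_use_multipath value instead of a hardcoded False."""
    props = dict(base_props)  # don't mutate the caller's dict
    props['multipath'] = bool(conf.get('volume_use_multipath', False))
    return props
```

The bug report is that on Pike/Queens the equivalent step apparently no longer sees the configured value, so 'multipath' arrives as False.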

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1723928/+subscriptions



[Yahoo-eng-team] [Bug 1742421] [NEW] Cells Layout (v2) in nova doc misleading about upcalls

2018-01-10 Thread Liam Young
Public bug reported:


- [X] This doc is inaccurate in this way: Documentation suggests nova v2 cells
do not make 'upcalls' but they do when talking to the placement API.
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 


It is important to note that services in the lower cell boxes
only have the ability to call back to the placement API and no other
API-layer services via RPC, nor do they have access to the API database
for global visibility of resources across the cloud. This is intentional
and provides security and failure domain isolation benefits, but also has 
impacts on some things that would otherwise require this any-to-any 
communication style. Check the release notes for the version of Nova you 
are using for the most up-to-date information about any caveats that may be
present due to this limitation.


---
Release: 17.0.0.0b3.dev323 on 2018-01-09 21:52
SHA: 90a92d33edaea2b7411a5fd528f3159a486e1fd0
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/cellsv2-layout.rst
URL: https://docs.openstack.org/nova/latest/user/cellsv2-layout.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742421

Title:
  Cells Layout (v2) in nova doc misleading about upcalls

Status in OpenStack Compute (nova):
  New

Bug description:
  
  - [X] This doc is inaccurate in this way: Documentation suggests nova v2
cells do not make 'upcalls' but they do when talking to the placement API.
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  
  It is important to note that services in the lower cell boxes
  only have the ability to call back to the placement API and no other
  API-layer services via RPC, nor do they have access to the API database
  for global visibility of resources across the cloud. This is intentional
  and provides security and failure domain isolation benefits, but also has 
  impacts on some things that would otherwise require this any-to-any 
  communication style. Check the release notes for the version of Nova you 
  are using for the most up-to-date information about any caveats that may be
  present due to this limitation.

  
  ---
  Release: 17.0.0.0b3.dev323 on 2018-01-09 21:52
  SHA: 90a92d33edaea2b7411a5fd528f3159a486e1fd0
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/cellsv2-layout.rst
  URL: https://docs.openstack.org/nova/latest/user/cellsv2-layout.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742421/+subscriptions



[Yahoo-eng-team] [Bug 1723928] Re: In case of volume_use_multipath=True, Nova unable to fetch CONF.libvirt.volume_use_multipath value from nova.conf

2018-01-10 Thread Sean McGinnis
** Changed in: os-brick
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723928

Title:
  In case of volume_use_multipath=True, Nova unable to fetch
  CONF.libvirt.volume_use_multipath value from nova.conf

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  Invalid

Bug description:
  Issue :-
  --
  When we place 'volume_use_multipath=True' in nova.conf and attach a volume
to an instance, the 'connector' dictionary passed to cinder's
initialize_connection() has multipath=False (i.e. connector['multipath']=False).

  Expected :-
  --
  This should be connector['multipath']=True since I have placed
'volume_use_multipath=True'.

  connector
  {'wwpns': [u'1000d4c9ef76a1d1', u'1000d4c9ef76a1d5'], 'wwnns': 
[u'2000d4c9ef76a1d1', u'2000d4c9ef76a1d5'], 'ip': '10.50.0.155', 'initiator': 
u'iqn.1993-08.org.debian:01:db6bf10a0db', 'platform': 'x86_64', 'host': 
'cld6b11', 'do_local_attach': False, 'os_type': 'linux2', 'multipath': False}

  
  Steps to reproduce :-
  
  1) Place volume_use_multipath=True in nova.conf libvirt section
  [libvirt]
  live_migration_uri = qemu+ssh://stack@%s/system
  cpu_mode = none
  virt_type = kvm
  volume_use_multipath = True

  2) Create an LVM volume
  3) Create an instance and try to attach the volume.

  Note :- 
  -
  This multipath functionality worked fine in Ocata, but from Pike through the
current (Queens) release it is not working as expected.

  connector dictionary in ocata release :-
  connector
  {u'wwpns': [u'100038eaa73005a1', u'100038eaa73005a5'], u'wwnns': 
[u'200038eaa73005a1', u'200038eaa73005a5'], u'ip': u'10.50.128.110', 
u'initiator': u'iqn.1993-08.org.debian:01:d7f1c5d25e0', u'platform': u'x86_64', 
u'host': u'cld6b10', u'do_local_attach': False, u'os_type': u'linux2', 
u'multipath': True}

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1723928/+subscriptions



[Yahoo-eng-team] [Bug 1742450] [NEW] configuration of (FWaaS) v2 in file l3_agent.ini

2018-01-10 Thread miaoyuliang
Public bug reported:

On the page
https://docs.openstack.org/neutron/pike/admin/fwaas-v2-scenario.html, the
current configuration of the l3_agent.ini file causes errors in
l3_agent.log.

Version: Pike
Linux version 3.10.0-693.11.6.el7.x86_64 (buil...@kbuilder.dev.centos.org) (gcc 
version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Thu Jan 4 01:06:37 UTC 
2018

The details and the fix follow:

WRONG:
 [AGENT]
 extensions = fwaas

RIGHT:
 [AGENT]
 extensions = fwaas_v2
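The setting can be sanity-checked programmatically; a hypothetical helper (the path and section name follow the scenario guide, but this is not part of neutron):

```python
import configparser

def fwaas_v2_enabled(path):
    """Hypothetical check: does l3_agent.ini enable the v2 FWaaS
    L3 agent extension under [AGENT]?"""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    # fallback='' covers a missing [AGENT] section or missing option
    extensions = cfg.get('AGENT', 'extensions', fallback='')
    return 'fwaas_v2' in [e.strip() for e in extensions.split(',')]
```

With `extensions = fwaas` this returns False, flagging the misconfiguration that produces the RPC errors below.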

If it is configured as "fwaas" in the Firewall-as-a-Service (FWaaS) v2
scenario, errors like the following appear in l3_agent.log:

2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent 
[req-96d0b44f-bd90-4bee-b17d-9ea3fde0c013 - 348df301bd2a4bb9b643d410bfaf1884 - 
- -] FWaaS RPC info call failed for '8bc18231-5122-4707-9141-33e36e943823'.: 
RemoteError: Remote error: NoSuchMethod Endpoint does not support RPC method 
get_firewalls_for_tenant
[u'Traceback (most recent call last):\n', u'  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in 
_process_incoming\nres = self.dispatcher.dispatch(message)\n', u'  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 218, 
in dispatch\nraise NoSuchMethod(method)\n', u'NoSuchMethod: Endpoint does 
not support RPC method get_firewalls_for_tenant\n'].
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent Traceback 
(most recent call last):
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File 
"/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py",
 line 220, in add_router
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent 
self._process_router_add(new_router)
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File 
"/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py",
 line 195, in _process_router_add
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent 
fw_list = self.fwplugin_rpc.get_firewalls_for_tenant(ctx)
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File 
"/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py",
 line 46, in get_firewalls_for_tenant
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent return 
cctxt.call(context, 'get_firewalls_for_tenant', host=self.host)
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 162, in call
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent return 
self._original_context.call(ctxt, method, **kwargs)
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in 
call
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent 
retry=self.retry)
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 123, in 
_send
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent 
timeout=timeout, retry=retry)
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
566, in send
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent 
retry=retry)
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
557, in _send
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent raise 
result
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent 
RemoteError: Remote error: NoSuchMethod Endpoint does not support RPC method 
get_firewalls_for_tenant
2018-01-10 20:28:30.437 1322 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent 
[u'Traceback (most recent call last):\n', u'  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in 
_process_incoming\nres = self.dispatcher.dispatch(message)\n

[Yahoo-eng-team] [Bug 1742467] [NEW] Compute unnecessarily gets resource provider aggregates during every update_available_resource run

2018-01-10 Thread Matt Riedemann
Public bug reported:

This was provided by Kris Lindgren from GoDaddy on his Pike deployment
that is now running with Placement.

He noted that for every update_available_resource periodic task run,
these are the calls to Placement if inventory didn't change:

https://paste.ubuntu.com/26356656/

So there are 5 GET requests in there.

In this run, there isn't a call to get the resource provider itself
because the SchedulerReportClient has it cached in the
_resource_providers dict.

But it still gets aggregates for the provider twice because it always
wants to be up to date.

The aggregates are put in the _provider_aggregate_map, however, that
code is never used by anything since nova doesn't yet support resource
provider aggregates since those are needed for shared resource
providers, like a shared storage pool.

Until nova supports shared providers, we likely should just comment the
_provider_aggregate_map code out if nothing is using it to avoid the
extra HTTP requests to Placement every minute (the default periodic task
interval).
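The suggested behavior can be sketched as follows (a hypothetical class, heavily simplified from SchedulerReportClient): cache aggregates per provider and refresh only on demand, so a periodic run with unchanged inventory issues no extra GETs to Placement.

```python
class ReportClientSketch:
    """Hypothetical, heavily simplified stand-in for the scheduler
    report client: aggregates are cached per resource provider and
    refreshed only when explicitly requested, rather than re-fetched
    on every update_available_resource run."""

    def __init__(self, placement_get):
        self._get = placement_get          # callable issuing the HTTP GET
        self._provider_aggregate_map = {}  # rp_uuid -> aggregates payload
        self.get_count = 0                 # GETs actually sent to Placement

    def aggregates(self, rp_uuid, refresh=False):
        if refresh or rp_uuid not in self._provider_aggregate_map:
            self.get_count += 1
            self._provider_aggregate_map[rp_uuid] = self._get(
                '/resource_providers/%s/aggregates' % rp_uuid)
        return self._provider_aggregate_map[rp_uuid]
```

Until shared providers land, even the refresh path could be dropped entirely, as the report suggests.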

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742467

Title:
  Compute unnecessarily gets resource provider aggregates during every
  update_available_resource run

Status in OpenStack Compute (nova):
  New

Bug description:
  This was provided by Kris Lindgren from GoDaddy on his Pike deployment
  that is now running with Placement.

  He noted that for every update_available_resource periodic task run,
  these are the calls to Placement if inventory didn't change:

  https://paste.ubuntu.com/26356656/

  So there are 5 GET requests in there.

  In this run, there isn't a call to get the resource provider itself
  because the SchedulerReportClient has it cached in the
  _resource_providers dict.

  But it still gets aggregates for the provider twice because it always
  wants to be up to date.

  The aggregates are put in the _provider_aggregate_map, however, that
  code is never used by anything since nova doesn't yet support resource
  provider aggregates since those are needed for shared resource
  providers, like a shared storage pool.

  Until nova supports shared providers, we likely should just comment
  the _provider_aggregate_map code out if nothing is using it to avoid
  the extra HTTP requests to Placement every minute (the default
  periodic task interval).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742467/+subscriptions



[Yahoo-eng-team] [Bug 1742467] Re: Compute unnecessarily gets resource provider aggregates during every update_available_resource run

2018-01-10 Thread Matt Riedemann
** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Tags added: performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742467

Title:
  Compute unnecessarily gets resource provider aggregates during every
  update_available_resource run

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  This was provided by Kris Lindgren from GoDaddy on his Pike deployment
  that is now running with Placement.

  He noted that for every update_available_resource periodic task run,
  these are the calls to Placement if inventory didn't change:

  https://paste.ubuntu.com/26356656/

  So there are 5 GET requests in there.

  In this run, there isn't a call to get the resource provider itself
  because the SchedulerReportClient has it cached in the
  _resource_providers dict.

  But it still gets aggregates for the provider twice because it always
  wants to be up to date.

  The aggregates are put in the _provider_aggregate_map, however, that
  code is never used by anything since nova doesn't yet support resource
  provider aggregates since those are needed for shared resource
  providers, like a shared storage pool.

  Until nova supports shared providers, we likely should just comment
  the _provider_aggregate_map code out if nothing is using it to avoid
  the extra HTTP requests to Placement every minute (the default
  periodic task interval).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742467/+subscriptions



[Yahoo-eng-team] [Bug 1605098] Re: Nova usage not showing server real uptime

2018-01-10 Thread Chris Friesen
Nova reserves resources for the instance even if it's not running, so
the reported uptime probably shouldn't be used for billing.

Also, the uptime gets reset on a resize/revert-resize/rescue, further
making it tricky to use for billing.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1605098

Title:
  Nova usage not showing server real uptime

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi All,

  I am trying to calculate an OpenStack server's "uptime", but nova usage
  only reports the server creation time, which can't be used for billing.
  Is there any way to do this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1605098/+subscriptions



[Yahoo-eng-team] [Bug 1555415] Re: Session timeout from AngularJS pages doesn't give user feedback

2018-01-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/459908
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2415d5ea59a465881e10ec5a02da5d52b64d8a58
Submitter: Zuul
Branch:master

commit 2415d5ea59a465881e10ec5a02da5d52b64d8a58
Author: gugl 
Date:   Tue Apr 25 18:14:57 2017 -0700

Added error msg when gets redirect to login page

This checkin includes the following:

1. Added an error toast message when the user gets
an unauthorized 401 or 404 error during an operation.

2. When the modal dialog shows up, it also adds an error
message at the top of the dialog.

3. Also added some unit tests so that coverage passes
the threshold.

Change-Id: I5e0962937932a21565d374561f09f98013063a4f
Closes-bug: #1555415


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1555415

Title:
  Session timeout from AngularJS pages doesn't give user feedback

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  At the moment we have very high-level code in the config() of
  framework.module.js which redirects the user to the login page if an
  API action is unauthorised (session timeout). There's no visible
  feedback to the user at this point though.

  I think we need to move the addition of that handler to a run() so
  that the horizon.framework.widgets.toast.service is available and we
  can throw up a toast telling the user that they've been logged out. We
  can't include it in the config() because at that time the toast
  service hasn't been instantiated.

  You can see this in action by loading up the swift UI and forcing a
  session invalidation, then clicking on some action like viewing a
  container contents.
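  The suggested fix can be sketched framework-free (names like toastService
  and redirectToLogin are hypothetical stand-ins for Horizon's services; the
  real change would wire this up in an Angular run() block, where the toast
  service already exists):

```javascript
// Hypothetical sketch: build a response-error handler that surfaces a toast
// before redirecting on session timeout. toastService and redirectToLogin
// stand in for horizon.framework.widgets.toast.service and the login redirect.
function makeAuthErrorHandler(toastService, redirectToLogin) {
  return function onResponseError(response) {
    if (response.status === 401) {
      // Give the user visible feedback before bouncing them to login.
      toastService.add('error', 'Your session has expired. Please log in again.');
      redirectToLogin();
    }
    return response;
  };
}
```

  Registering the handler from run() rather than config() matters because the
  toast service has not yet been instantiated at config time.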

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1555415/+subscriptions



[Yahoo-eng-team] [Bug 1742494] [NEW] network reporters generate WARNING in local stage

2018-01-10 Thread Scott Moser
Public bug reported:

Any of the network based reporters (MAAS) that are configured in system
config (/etc/cloud/cloud.cfg*) will attempt to be used during the local
phase of cloud-init (cloud-init-local.service).

Unsurprisingly, these will fail.

Unfortunately, they log WARNING to the logs like:
2018-01-10 15:41:20,842 - handlers.py[WARNING]: failed posting event: start: 
init-local/check-cache: attempting to read from cache [check]


This is clearly not a WARNING that should be paid attention to.

It'd be good to only log debug or in some other way handle this better.

The end result is just false positives for WARNING in /var/log/cloud-
init.log on nodes deployed by maas.
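The suggested handling can be sketched as follows (post_event and poster are
hypothetical names, not cloud-init's actual reporting API; the point is only
that delivery failures during the network-less local stage log at DEBUG
rather than WARNING):

```python
import logging

LOG = logging.getLogger(__name__)

def post_event(poster, event):
    """Attempt to deliver a reporting event, treating delivery failure as
    expected rather than alarming."""
    try:
        poster(event)
        return True
    except Exception as exc:
        # DEBUG, not WARNING: during cloud-init-local the network is expected
        # to be unavailable, so a failed post is not actionable.
        LOG.debug("failed posting event %s: %s", event, exc)
        return False
```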

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1742494

Title:
  network reporters generate WARNING in local stage

Status in cloud-init:
  Confirmed

Bug description:
  Any of the network based reporters (MAAS) that are configured in
  system config (/etc/cloud/cloud.cfg*) will attempt to be used during
  the local phase of cloud-init (cloud-init-local.service).

  Unsurprisingly, these will fail.

  Unfortunately, they log WARNING to the logs like:
  2018-01-10 15:41:20,842 - handlers.py[WARNING]: failed posting event: start: 
init-local/check-cache: attempting to read from cache [check]

  
  This is clearly not a WARNING that should be paid attention to.

  It'd be good to only log debug or in some other way handle this
  better.

  The end result is just false positives for WARNING in /var/log/cloud-
  init.log on nodes deployed by maas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1742494/+subscriptions



[Yahoo-eng-team] [Bug 1742505] [NEW] gre_sys set to default 1472 when using path_mtu > 1500 with ovs 2.8.x

2018-01-10 Thread David Ames
Public bug reported:

Setup:
Pike neutron 11.0.2-0ubuntu1.1~cloud0
OVS 2.8.0
Jumbo frames settings per: 
https://docs.openstack.org/mitaka/networking-guide/config-mtu.html
global_physnet_mtu = 9000
path_mtu = 9000

Symptoms:
gre_sys MTU is 1472
Instances with MTUs > 1500 fail to communicate across GRE

Temporary Workaround:
ifconfig gre_sys MTU 9000
Note: When ovs rebuilds tunnels, such as on a restart, gre_sys MTU is set back 
to default 1472.

Note: downgrading from OVS 2.8.0 to 2.6.1 resolves the issue.

Previous behavior:
With Ocata or Pike and OVS 2.6.x
gre_sys MTU defaults to 65490
It remains at 65490 through restarts.

This may be related to some combination of the following changes in OVS which 
seem to imply MTUs must be set in the ovs database for tunnel interfaces and 
patches:
https://github.com/openvswitch/ovs/commit/8c319e8b73032e06c7dd1832b3b31f8a1189dcd1
https://github.com/openvswitch/ovs/commit/3a414a0a4f1901ba015ec80b917b9fb206f3c74f
https://github.com/openvswitch/ovs/blob/6355db7f447c8e83efbd4971cca9265f5e0c8531/datapath/vport-internal_dev.c#L186
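
If those OVS changes do mean the MTU must now come from the ovs database, a
possible persistent workaround (untested sketch against this exact setup) is
the Interface table's mtu_request column, which, unlike the one-off ifconfig
above, should survive tunnel rebuilds:

```shell
# Pin a 9000-byte MTU on gre_sys in the ovs database (sketch, untested here):
ovs-vsctl set Interface gre_sys mtu_request=9000
# Verify the applied MTU:
ovs-vsctl get Interface gre_sys mtu
```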

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: serverstack uosci upgrade

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1742505

Title:
  gre_sys set to default 1472 when using path_mtu > 1500 with ovs 2.8.x

Status in neutron:
  New

Bug description:
  Setup:
  Pike neutron 11.0.2-0ubuntu1.1~cloud0
  OVS 2.8.0
  Jumbo frames settings per: 
https://docs.openstack.org/mitaka/networking-guide/config-mtu.html
  global_physnet_mtu = 9000
  path_mtu = 9000

  Symptoms:
  gre_sys MTU is 1472
  Instances with MTUs > 1500 fail to communicate across GRE

  Temporary Workaround:
  ifconfig gre_sys MTU 9000
  Note: When ovs rebuilds tunnels, such as on a restart, gre_sys MTU is set 
back to default 1472.

  Note: downgrading from OVS 2.8.0 to 2.6.1 resolves the issue.

  Previous behavior:
  With Ocata or Pike and OVS 2.6.x
  gre_sys MTU defaults to 65490
  It remains at 65490 through restarts.

  This may be related to some combination of the following changes in OVS which 
seem to imply MTUs must be set in the ovs database for tunnel interfaces and 
patches:
  
https://github.com/openvswitch/ovs/commit/8c319e8b73032e06c7dd1832b3b31f8a1189dcd1
  
https://github.com/openvswitch/ovs/commit/3a414a0a4f1901ba015ec80b917b9fb206f3c74f
  
https://github.com/openvswitch/ovs/blob/6355db7f447c8e83efbd4971cca9265f5e0c8531/datapath/vport-internal_dev.c#L186

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1742505/+subscriptions



[Yahoo-eng-team] [Bug 1742508] [NEW] Deleted instances are still shown in openstack server list and Horizon

2018-01-10 Thread David Manchado
Public bug reported:

Description
===
Deleted instances are still being listed by openstack server list and in the 
Horizon instances dashboard.
Any operation on the instance (delete, show) fails with 'No server with a 
name or ID'.
We have more than 70 instances in this situation.
The instance is no longer present in nova.instances and is only referenced in:
nova.instance_id_mappings1 time
nova_api.request_specs   1 time
nova_cell0.instance_extra1 time
nova_cell0.instance_faults   1 time
nova_cell0.instance_id_mappings  1 time
nova_cell0.instance_info_caches  1 time
nova_cell0.instance_metadata 1 time
nova_cell0.instance_system_metadata  1 time
nova_cell0.instances 1 time

Steps to reproduce
==
Not sure.
Most of the issues are probably related to operations performed while upgrading 
the overcloud, which left instances in an inconsistent state, or to database 
migration/upgrade tasks.


Expected result
===
Deleted instances to be gone


Actual result
=
Deleted instance is still being listed/shown. No operation can be done.
$ openstack server list --all --format json | grep 
f91980fb-40e9-4f64-a90e-8701575edac1
"ID": "f91980fb-40e9-4f64-a90e-8701575edac1", 

$ openstack server show f91980fb-40e9-4f64-a90e-8701575edac1
No server with a name or ID of 'f91980fb-40e9-4f64-a90e-8701575edac1' exists.


Environment
===
This is a TripleO / CentOS / RDO setup.
1. Exact version of OpenStack you are running.
openstack-nova-scheduler-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
python2-novaclient-7.1.2-1.el7.noarch
openstack-nova-novncproxy-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-cert-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-console-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-conductor-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-common-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-compute-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-placement-api-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
puppet-nova-10.4.2-0.2018010220.f4bc1f0.el7.centos.noarch
openstack-nova-api-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
python-nova-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch

2. Which hypervisor did you use?
Libvirt + KVM
libvirt-daemon-driver-qemu-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-kvm-3.2.0-14.el7_4.7.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
centos-release-qemu-ev-1.0-2.el7.noarch
qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64
qemu-kvm-common-ev-2.9.0-16.el7_4.13.1.x86_64
qemu-img-ev-2.9.0-16.el7_4.13.1.x86_64


2. Which storage type did you use?
Ceph 10.2.7

3. Which networking type did you use?
Neutron Openvswitch

Logs & Configs
==

nova-api.log:2018-01-10 12:34:52.252 6799 INFO nova.api.openstack.wsgi 
[req-3a17767a-6f3b-43c9-b6bc-4672fc19a9c3 1ded2d1b92794bf5b362d76fa2fcee69 
8dbc460fe5fd4fcab0096c2c0aad3ece - default default] HTTP exception thrown: 
Instance f91980fb-40e9-4f64-a90e-8701575edac1 could not be found.
nova-api.log:2018-01-10 12:34:52.253 6799 INFO nova.osapi_compute.wsgi.server 
[req-3a17767a-6f3b-43c9-b6bc-4672fc19a9c3 1ded2d1b92794bf5b362d76fa2fcee69 
8dbc460fe5fd4fcab0096c2c0aad3ece - default default] XXX.XXX.XXX.XXX "GET 
/v2.1/servers/f91980fb-40e9-4f64-a90e-8701575edac1 HTTP/1.1" status: 404 len: 
442 time: 0.2764058
nova-api.log:2018-01-10 12:35:01.367 6803 INFO nova.api.openstack.wsgi 
[req-c5c7d5cd-4b9c-416a-83c8-069e2c92f188 1ded2d1b92794bf5b362d76fa2fcee69 
8dbc460fe5fd4fcab0096c2c0aad3ece - default default] HTTP exception thrown: 
Instance f91980fb-40e9-4f64-a90e-8701575edac1 could not be found.
nova-api.log:2018-01-10 12:35:01.368 6803 INFO nova.osapi_compute.wsgi.server 
[req-c5c7d5cd-4b9c-416a-83c8-069e2c92f188 1ded2d1b92794bf5b362d76fa2fcee69 
8dbc460fe5fd4fcab0096c2c0aad3ece - default default] XXX.XXX.XXX.XXX "GET 
/v2.1/servers/f91980fb-40e9-4f64-a90e-8701575edac1 HTTP/1.1" status: 404 len: 
442 time: 0.3496580
nova-api.log:2018-01-10 12:35:01.626 6803 INFO nova.osapi_compute.wsgi.server 
[req-39dd7466-584f-41df-bc8a-569a4f9851f5 1ded2d1b92794bf5b362d76fa2fcee69 
8dbc460fe5fd4fcab0096c2c0aad3ece - default default] XXX.XXX.XXX.XXX "GET 
/v2.1/servers?name=f91980fb-40e9-4f64-a90e-8701575edac1 HTTP/1.1" status: 200 
len: 323 time: 0.0913641
nova-api.log:2018-01-10 13:56:45.206 6810 INFO nova.api.openstack.wsgi 
[req-0667f674-653d-43f1-9712-fa8c5b20343c 1ded2d1b92794bf5b362d76fa2fcee69 
8dbc460fe5fd4fcab0096c2c0aad3ece - default default] HTTP exception thrown: 
Instance f91980fb-40e9-4f64-a90e-8701575edac1 could not be found.
nova-api.log:2018-01-10 13:56:45.210 6810 INFO nova.osapi_compute.wsgi.server 
[req-0667f674-653d-43f1-9712-fa8c5b20343c 1ded2d1b92794bf5b362d76fa2fcee69 
8dbc460fe5fd4fcab0096c2c0aad3ece - default default] XXX.XXX.XXX.XXX "GET 
/v2.1/servers/f91980fb-40e9-4f64-a90e

[Yahoo-eng-team] [Bug 1582585] Re: the speed of query user from ldap server is very slow

2018-01-10 Thread Corey Bryant
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Also affects: keystone (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

** Changed in: keystone (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: cloud-archive/mitaka
   Importance: Undecided => High

** Changed in: keystone (Ubuntu Xenial)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1582585

Title:
  the speed of query user from ldap server is very slow

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  New
Status in keystone source package in Xenial:
  Triaged

Bug description:
  In our project, querying users from the LDAP server is very slow: with
  12,000 LDAP users, a query takes almost 45 seconds.

  The reason is that keystone generates a UUID for each LDAP user one by
  one and inserts it into the database, and later queries also go to the
  database instead of using a cache.
  So we added a cache to improve the query speed.

  After adding @MEMOIZE to the following function
  
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L580.
  The first query still takes almost 50 seconds, but subsequent queries
  take only about 7 seconds.

  So this improvement is well worth making.
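
  What @MEMOIZE buys can be illustrated with a toy cache (this is not
  keystone's actual implementation, which is backed by oslo.cache/dogpile;
  the hypothetical get_user stands in for the slow LDAP-backed lookup):

```python
import functools

def memoize(func):
    """Toy stand-in for keystone's @MEMOIZE: cache per-argument results so
    repeated lookups for the same user skip the expensive backend call."""
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

backend_calls = []

@memoize
def get_user(user_id):
    backend_calls.append(user_id)   # stands in for the slow LDAP query
    return {"id": user_id}
```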

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1582585/+subscriptions



[Yahoo-eng-team] [Bug 1582585] Re: the speed of query user from ldap server is very slow

2018-01-10 Thread Corey Bryant
** Also affects: cloud-archive/newton
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/newton
   Status: New => Triaged

** Changed in: cloud-archive/newton
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1582585

Title:
  the speed of query user from ldap server is very slow

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  New
Status in keystone source package in Xenial:
  Triaged

Bug description:
  In our project, querying users from the LDAP server is very slow: with
  12,000 LDAP users, a query takes almost 45 seconds.

  The reason is that keystone generates a UUID for each LDAP user one by
  one and inserts it into the database, and later queries also go to the
  database instead of using a cache.
  So we added a cache to improve the query speed.

  After adding @MEMOIZE to the following function
  
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L580.
  The first query still takes almost 50 seconds, but subsequent queries
  take only about 7 seconds.

  So this improvement is well worth making.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1582585/+subscriptions



[Yahoo-eng-team] [Bug 1692397] Re: hypervisor statistics could be incorrect

2018-01-10 Thread Corey Bryant
This bug was fixed in the package nova - 2:13.1.4-0ubuntu4.2~cloud0
---

 nova (2:13.1.4-0ubuntu4.2~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 nova (2:13.1.4-0ubuntu4.2) xenial; urgency=medium
 .
   [ Seyeong Kim ]
   * Add supporting http_proxy_to_wsgi to api-paste.ini (LP: #1573766)
 - d/p/0001-Add-http_proxy_to_wsgi-to-api-paste.patch
 - d/p/0002-Add-proxy-middleware-to-application-pipeline.patch
 .
   [ Edward Hope-Morley ]
   * Patch nova.db.sqlalchemy.api.compute_node_statistics() to
 exclude deleted services from stats count. This is the same
 fix as that backported to newton in bug 1692397 except that
 the actual patch is not backportable due to the underlying
 code changing extensively.
 - d/p/exlude-deleted-service-from-stats-count.patch (LP: #1692397)


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1692397

Title:
  hypervisor statistics could be incorrect

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  If you deploy a nova-compute service to a node, delete that service
  (via the api), then deploy a new nova-compute service to that same
  node i.e. same hostname, the database will now have two service
  records one marked as deleted and the other not. So far so good until
  you do an 'openstack hypervisor stats show' at which point the api
  will aggregate the resource counts from both services. This has been
  fixed and backported all the way down to Newton so the problem still
  exists on Mitaka. I assume the reason why the patch was not backported
  to Mitaka is that the code in
  nova.db.sqlalchemy.apy.compute_node_statistics() changed quite a bit.
  However it only requires a one line change in the old code (that does
  the same thing as the new code) to fix this issue.
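
  The one-line nature of the fix can be illustrated with a toy aggregator
  (hypothetical data model: each service row carries a deleted flag, as in
  nova's services table; this is not nova's actual code):

```python
def compute_node_statistics(service_rows):
    """Toy version of the stats aggregation: the fix amounts to skipping
    rows whose service record is marked deleted, so a redeployed host is
    not counted twice."""
    live = [s for s in service_rows if not s["deleted"]]
    return {
        "count": len(live),
        "vcpus": sum(s["vcpus"] for s in live),
    }
```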

  [Test Case]

   * Deploy Mitaka with bundle http://pastebin.ubuntu.com/25968008/

   * Do 'openstack hypervisor stats show' and verify that count is 3

   * Do 'juju remove-unit nova-compute/2' to delete a compute service
  but not its physical host

   * Do 'openstack compute service delete ' to delete a compute
  service we just removed (choosing correct id)

   * Do 'openstack hypervisor stats show' and verify that count is 2

   * Do juju add-unit nova-compute --to 

   * Do 'openstack hypervisor stats show' and verify that count is 3
  (not 4 as it would be before fix)

  [Regression Potential]

  None anticipated other than for clients that were interpreting invalid
  counts as correct.

  [Other Info]
   
  ===

  Hypervisor statistics could be incorrect:

  When we killed a nova-compute service and deleted the service from the nova 
DB, and then started the nova-compute service again, the result of the 
hypervisor statistics API (nova hypervisor-stats) was incorrect.

  How to reproduce:

  Step1. Check the correct statistics before we do anything:
  root@SZX1000291919:/opt/stack/nova# nova  hypervisor-stats
  +----------------------+-------+
  | Property             | Value |
  +----------------------+-------+
  | count                | 1     |
  | current_workload     | 0     |
  | disk_available_least | 14    |
  | free_disk_gb         | 34    |
  | free_ram_mb          | 6936  |
  | local_gb             | 35    |
  | local_gb_used        | 1     |
  | memory_mb            | 7960  |
  | memory_mb_used       | 1024  |
  | running_vms          | 1     |
  | vcpus                | 8     |
  | vcpus_used           | 1     |
  +----------------------+-------+

  Step2. Kill the compute service:
  root@SZX1000291919:/var/log/nova# ps -ef | grep nova-com
  root 120419 120411  0 11:06 pts/27   00:00:00 sg libvirtd 
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --log-file 
/var/log/nova/nova-compute.log
  root 120420 120419  0 11:06 pts/27   00:00:07 /usr/bin/python 
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --log-file 
/var/log/nova/nova-compute.log

  root@SZX1000291919:/var/log/nova# kill -9 120419
  root@SZX1000291919:/var/log/nova# /usr/local/bin/stack: line 19: 120419 
Killed  sg libvirtd '/usr/local/bin/nova-compute --config-file 
/etc/nova/nova.conf --log-file /var/log/nova/nova-compute.log' > /dev/null 2>&1

  root@SZX1000291919:/var/log/nova# nova service-list
  
++--+---+--+---

[Yahoo-eng-team] [Bug 1573766] Re: Enable the paste filter HTTPProxyToWSGI by default

2018-01-10 Thread Corey Bryant
This bug was fixed in the package nova - 2:13.1.4-0ubuntu4.2~cloud0
---

 nova (2:13.1.4-0ubuntu4.2~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 nova (2:13.1.4-0ubuntu4.2) xenial; urgency=medium
 .
   [ Seyeong Kim ]
   * Add supporting http_proxy_to_wsgi to api-paste.ini (LP: #1573766)
 - d/p/0001-Add-http_proxy_to_wsgi-to-api-paste.patch
 - d/p/0002-Add-proxy-middleware-to-application-pipeline.patch
 .
   [ Edward Hope-Morley ]
   * Patch nova.db.sqlalchemy.api.compute_node_statistics() to
 exclude deleted services from stats count. This is the same
 fix as that backported to newton in bug 1692397 except that
 the actual patch is not backportable due to the underlying
 code changing extensively.
 - d/p/exlude-deleted-service-from-stats-count.patch (LP: #1692397)


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1573766

Title:
  Enable the paste filter HTTPProxyToWSGI by default

Status in OpenStack nova-cloud-controller charm:
  In Progress
Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in nova source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  Getting http link instead of https even if https setting is set.

  [Test case]

  1. deploy openstack ( with keystone charm option use-https, 
https-service-endpoints)
  2. create instance
  3. nova --debug list
 - check the result if https links are there.

  [Regression Potential]

  The nova package is affected by this patch. However, the patch modifies
  only api-paste.ini by adding http_proxy_to_wsgi. To pick up the change,
  the nova services need to be restarted. Testing showed that running VMs
  are not affected, though the APIs and daemons are briefly interrupted
  during the restart.

  
  [Others]

  related commits ( which are already in comments )

  
https://git.openstack.org/cgit/openstack/nova/commit/?id=b609a3b32ee8e68cef7e66fabff07ca8ad6d4649
  
https://git.openstack.org/cgit/openstack/nova/commit/?id=6051f30a7e61c32833667d3079744b2d4fd1ce7c

  
  [Original Description]

  oslo middleware provides a paste filter that sets the correct proxy
  scheme and host. This is needed for the TLS proxy case.

  Without this then enabling the TLS proxy in devstack will fail
  configuring tempest because 'nova flavor-list' returns a http scheme
  in Location in a redirect it returns.

  I've proposed a temporary workaround in devstack using:

  +iniset $NOVA_API_PASTE_INI filter:ssl_header_handler paste.filter_factory oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
  +iniset $NOVA_API_PASTE_INI composite:openstack_compute_api_v21 keystone "ssl_header_handler cors compute_req_id faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v21"

  But this isn't a long-term solution because two copies of the default
  paste filters will need to be maintained.

  See https://review.openstack.org/#/c/301172
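
  For reference, the devstack workaround above corresponds to a paste
  configuration along these lines (a sketch: the section name is
  illustrative, while the factory path is oslo.middleware's):

```ini
[filter:ssl_header_handler]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
```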

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1573766/+subscriptions



[Yahoo-eng-team] [Bug 1735950] Re: ValueError: Old and New apt format defined with unequal values True vs False @ apt_preserve_sources_list

2018-01-10 Thread Chad Smith
Marking invalid for cloud-init as cloud-init's Traceback and error is
'desired' as it informs the cloud-init user of invalid cloud-config
provided to cloud-init.

** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1735950

Title:
  ValueError: Old and New apt format defined with unequal values True vs
  False @ apt_preserve_sources_list

Status in cloud-init:
  Invalid
Status in MAAS:
  Triaged
Status in MAAS 2.3 series:
  Triaged

Bug description:
  All nodes have these same failed events:

  Node post-installation failure - 'cloudinit' running modules for
  config

  Node post-installation failure - 'cloudinit' running config-apt-
  configure with frequency once-per-instance

  
  Experiencing odd issues with the squid proxy not being reachable.

  From a deployed node that had the event errors.

  $ sudo cat /var/log/cloud-init.log | http://paste.ubuntu.com/26098787/
  $ sudo cat /var/log/cloud-init-output.log | http://paste.ubuntu.com/26098802/

  ubuntu@os-util-00:~$ sudo apt install htop
  sudo: unable to resolve host os-util-00
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  The following NEW packages will be installed:
htop
  0 upgraded, 1 newly installed, 0 to remove and 14 not upgraded.
  Need to get 76.4 kB of archives.
  After this operation, 215 kB of additional disk space will be used.
  Err:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 htop 
amd64 2.0.1-1ubuntu1
Could not connect to 10.10.0.110:8000 (10.10.0.110). - connect (113: No 
route to host)
  E: Failed to fetch 
http://archive.ubuntu.com/ubuntu/pool/universe/h/htop/htop_2.0.1-1ubuntu1_amd64.deb
  Could not connect to 10.10.0.110:8000 (10.10.0.110). - connect (113: No route 
to host)

  E: Unable to fetch some archives, maybe run apt-get update or try with
  --fix-missing?


  
  Not sure if these things are related (the proxy not being reachable, and 
the node event errors), but something is not right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1735950/+subscriptions



[Yahoo-eng-team] [Bug 1742546] [NEW] Keystone appears to initiate each new request using the previous' request-id

2018-01-10 Thread Christian Sarrasin
Public bug reported:

Environment
~~~
* openstack-ansible version: 16.0.2
* Target OS: Ubuntu 16.04 Xenial
* Keystone deployed in containers, running on uWSGI (per OSA defaults)
* Keystone baseline (as provided by OSA): 
6a67918f9d5f39564af8eacc57b80cba98242683 # HEAD of "stable/pike" as of 
28.09.2017

Symptom
~~~
When running Keystone with debug=True, one can observe in keystone.log that 
each incoming request appears to "borrow" the req Id from the previous one 
served by that particular uWSGI process.

Analysis

This may be just cosmetic, but one wonders whether it indicates something 
executing under the wrong context (and hence could have security implications).

Example
~~~
In this slightly edited log excerpt from a specific worker (11207): 
http://paste.openstack.org/show/642496/ one can for instance see that the 
request incoming at 20:30:38.035 borrows 
"req-e4456225-01a5-498e-9b73-ad9772a54781" from the previous request that 
completed execution at 20:30:25.962.  The top of the log shows the same pattern 
and it's consistent throughout

(Note: there is a ~5s delay before 20:30:25.957, which is a different
issue; it is actually the one I was investigating, and it led me to
notice the pattern reported here.)
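
What correct behaviour looks like can be sketched in a few lines (a
hypothetical minimal context object; real keystone derives its request id via
oslo.context): every request should mint a fresh id rather than inherit the
worker's previous one.

```python
import uuid

def generate_request_id():
    # Same 'req-<uuid>' shape seen in the logs referenced above.
    return 'req-' + str(uuid.uuid4())

class RequestContext:
    """Hypothetical per-request context: a new id is minted for every
    request, so two requests served by the same worker never share one."""
    def __init__(self):
        self.request_id = generate_request_id()
```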

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1742546

Title:
  Keystone appears to initiate each new request using the previous'
  request-id

Status in OpenStack Identity (keystone):
  New

Bug description:
  Environment
  ~~~
  * openstack-ansible version: 16.0.2
  * Target OS: Ubuntu 16.04 Xenial
  * Keystone deployed in containers, running on uWSGI (per OSA defaults)
  * Keystone baseline (as provided by OSA): 
6a67918f9d5f39564af8eacc57b80cba98242683 # HEAD of "stable/pike" as of 
28.09.2017

  Symptom
  ~~~
  When running Keystone with debug=True, one can observe in keystone.log that 
each incoming request appears to "borrow" the req Id from the previous one 
served by that particular uWSGI process.

  Analysis
  
  This may be just cosmetic, but one wonders whether it indicates something 
executing under the wrong context (and hence could have security implications).

  Example
  ~~~
  In this slightly edited log excerpt from a specific worker (11207): 
http://paste.openstack.org/show/642496/ one can for instance see that the 
request incoming at 20:30:38.035 borrows 
"req-e4456225-01a5-498e-9b73-ad9772a54781" from the previous request that 
completed execution at 20:30:25.962.  The top of the log shows the same pattern 
and it's consistent throughout

  (Note: there is a ~5s delay before 20:30:25.957, which is a different
  issue; it is actually the one I was investigating, and it led me to
  notice the pattern reported here.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1742546/+subscriptions



[Yahoo-eng-team] [Bug 1402959] Re: Support Launching an instance with a port with vnic_type=direct

2018-01-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/513887
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=09efe2d432c7a17c696e7240f69b3c76b481e3f3
Submitter: Zuul
Branch:master

commit 09efe2d432c7a17c696e7240f69b3c76b481e3f3
Author: Akihiro Motoki 
Date:   Sat Oct 21 00:48:42 2017 +

Allow regular users to specify VNIC type for port

VNIC type needs to be specified when creating a server with an SR-IOV
port. In this case, a user first creates a port with vnic_type=direct
and creates a server with the created port. To allow this scenario,
horizon needs to allow users to specify VNIC type for port.

This commit also clean up the code duplication on the port security and
mac state fields in the port create form. The code in the admin form for
them are completely duplicated with that in the project form.

Change-Id: Ib6e91ed7f720e2720994025429da0dcef2fa4e25
Closes-Bug: #1402959


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1402959

Title:
  Support Launching an instance with a port with vnic_type=direct

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  To support Launching instances with 'SR-IOV' interfaces using the
  dashboard there is a need to:

  1) Add the ability to specify vnic_type in the 'port create' operation
  2) Add an option to create a port as a tenant (right now only an admin can 
do this)
  3) Add the ability to launch an instance with a pre-configured port
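
  The scenario these changes enable looks roughly like this from the CLI
  (untested sketch; names in angle brackets are placeholders):

```shell
# Create a port with an SR-IOV capable VNIC type, then boot with it.
openstack port create --network <network> --vnic-type direct sriov-port
openstack server create --flavor <flavor> --image <image> \
    --port sriov-port sriov-instance
```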

  Duplicate bugs:
  https://bugs.launchpad.net/horizon/+bug/1399252
  https://bugs.launchpad.net/horizon/+bug/1399254

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1402959/+subscriptions



[Yahoo-eng-team] [Bug 1712463] Re: failure when running under wsgi configuration

2018-01-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/531493
Committed: 
https://git.openstack.org/cgit/openstack/oslo.concurrency/commit/?id=55e06261aa86c87c7c059fbddc97cdbaae06e8dd
Submitter: Zuul
Branch:master

commit 55e06261aa86c87c7c059fbddc97cdbaae06e8dd
Author: Matthew Treinish 
Date:   Fri Jan 5 15:11:17 2018 -0500

Add python_exec kwarg to processutils.execute()

This commit adds a new kwarg to the processutils.execute() function to
specify the python executable to use when launching python to check
prlimits. This is necessary when processutils.execute() is called from
inside an API server running with uwsgi: in that case sys.executable is
uwsgi (because uwsgi links libpython.so and is actually the interpreter),
which breaks execute()'s assumption that sys.executable accepts the
cpython interpreter's CLI arguments when it invokes the prlimit module.
To work around this, API servers that may run under uwsgi can simply
pass in the executable to use.

Longer term it might be better to migrate the prlimit usage to call
multiprocessing instead of subprocessing python, but that would require
a more significant rewrite of both processutils and prlimit.

Change-Id: I0ae60f0b4cc3700c783f6018e837358f0e053a09
Closes-Bug: #1712463


** Changed in: oslo.concurrency
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1712463

Title:
  failure when running under wsgi configuration

Status in Glance:
  In Progress
Status in oslo.concurrency:
  Fix Released

Bug description:
  The Pike devstack runs Glance using the wsgi configuration. All
  created tasks are stuck in 'pending'. This doesn't happen in Ocata
  devstack.

  See comment #2 for the oslo.concurrency problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1712463/+subscriptions



[Yahoo-eng-team] [Bug 1731488] Re: nova list no IP address info for some instances.

2018-01-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1731488

Title:
  nova list no IP address info for some instances.

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hi Guys:

  "nova list" command output has no IP address info for some instances,
  but others have it; I don't know why.

  For example, there is no network info for GL-app-server-1 in the "nova
  list" output but there is for GL-app-server-2, yet I can see that
  GL-app-server-1 has an interface via "nova interface-list". Why?

  [root@cloud-01 ~]# nova list --all |grep app-server
  | d1d09a01-7ea6-4c28-af67-a3f6ea4cd9f7 | GL-app-server-1 | b7b2911d635c4657b5bf875b99e0ca14 | ACTIVE | - | Running |                          |
  | 37afa02b-eed9-4c71-8246-250463a5a9c6 | GL-app-server-2 | b7b2911d635c4657b5bf875b99e0ca14 | ACTIVE | - | Running | ex-eit-test=11.225.11.16 |

  [root@cloud-01 ~]# nova interface-list d1d09a01-7ea6-4c28-af67-a3f6ea4cd9f7
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+
  | ACTIVE     | 5b706f6b-6c52-49f9-9c56-13ff30b61d3c | bf358074-4e27-4b97-8122-c64d8a3ac665 | 11.225.11.13 | fa:16:3e:1f:b7:0b |
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+

  nova --version
  7.1.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1731488/+subscriptions



[Yahoo-eng-team] [Bug 1730800] Re: UnknownConnectionError

2018-01-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730800

Title:
  UnknownConnectionError

Status in OpenStack Compute (nova):
  Expired

Bug description:
  [root@controller ~]# openstack server create --flavor m1.nano --image cirros-0.3.0-i386 --security-group permitall --key-name mykey provider-instance
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  (HTTP 500) (Request-ID: req-87716e72-824e-4618-9854-4be8ea96166b)
  [root@controller ~]# 
  [root@controller ~]# 
  [root@controller ~]# !cat
  cat /var/log/nova/nova-api.log 

  2017-11-08 09:29:32.956 1938 INFO nova.api.openstack.wsgi [req-a6350065-4421-4e8f-b9a4-a65913a1d568 78688b319571447080aea1e62f59ecb5 15e6f6ed6c9240168b85b561fcc8bc0f - default default] HTTP exception thrown: Flavor m1.nano could not be found.
  2017-11-08 09:29:32.959 1938 INFO nova.osapi_compute.wsgi.server [req-a6350065-4421-4e8f-b9a4-a65913a1d568 78688b319571447080aea1e62f59ecb5 15e6f6ed6c9240168b85b561fcc8bc0f - default default] 192.168.99.6 "GET /v2.1/flavors/m1.nano HTTP/1.1" status: 404 len: 500 time: 0.0435319
  2017-11-08 09:29:32.982 1938 INFO nova.osapi_compute.wsgi.server [req-8a0509c7-3178-4458-94ab-7a10299d0134 78688b319571447080aea1e62f59ecb5 15e6f6ed6c9240168b85b561fcc8bc0f - default default] 192.168.99.6 "GET /v2.1/flavors HTTP/1.1" status: 200 len: 586 time: 0.0198040
  2017-11-08 09:29:32.999 1938 INFO nova.osapi_compute.wsgi.server [req-49747957-6405-4370-81ac-c9c4eaf80708 78688b319571447080aea1e62f59ecb5 15e6f6ed6c9240168b85b561fcc8bc0f - default default] 192.168.99.6 "GET /v2.1/flavors/0 HTTP/1.1" status: 200 len: 752 time: 0.0131040
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions [req-87716e72-824e-4618-9854-4be8ea96166b 78688b319571447080aea1e62f59ecb5 15e6f6ed6c9240168b85b561fcc8bc0f - default default] Unexpected exception in API method: UnknownConnectionError: Unexpected exception for api_server = http://controller:9292/v2/images/6c861554-35b2-49eb-8dbf-a2a51bc1d07a: No connection adapters were found for 'api_server = http://controller:9292/v2/images/6c861554-35b2-49eb-8dbf-a2a51bc1d07a'
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions Traceback (most recent call last):
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 336, in wrapped
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 553, in create
  2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions     **create_kwargs)
  2017-11-08 09:
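
For what it's worth, "No connection adapters were found for '...'" is
the error python-requests raises when the string handed to it does not
begin with a scheme it recognizes. The quoted value here starts with a
literal "api_server = " prefix, which suggests a config value that
accidentally includes the option name; that reading is inferred from
the log alone, not confirmed. A minimal, hypothetical sanity check
(looks_like_http_url is invented for the example, not nova's actual
validation code):

```python
from urllib.parse import urlparse


def looks_like_http_url(value):
    """Return True if value parses as an http(s) URL.

    requests rejects any string whose scheme it does not recognize with
    "No connection adapters were found for ...", e.g. when a config
    value accidentally contains the option name too, as in
    "api_server = http://controller:9292/...".
    """
    parsed = urlparse(value.strip())
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)


print(looks_like_http_url("http://controller:9292/v2/images/abc"))    # True
print(looks_like_http_url("api_server = http://controller:9292/v2"))  # False
```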