[Yahoo-eng-team] [Bug 1750334] [NEW] ovsdb commands timeouts cause fullstack tests failures

2018-02-19 Thread Slawek Kaplonski
Public bug reported:

Quite often, tests like
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart
fail because of a timeout while executing ovsdb commands.
Because of this, the test environment is not prepared and the test fails.

Example issues:
- 
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(OVS,VLANs,openflow-native):
 
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/testr_results.html.gz

- 
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(OVS,Flat network,openflow-native):
http://logs.openstack.org/79/545679/1/check/neutron-fullstack/3a12865/logs/testr_results.html.gz

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750334

Title:
  ovsdb commands timeouts cause fullstack tests failures

Status in neutron:
  Confirmed

Bug description:
  Quite often, tests like
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart
fail because of a timeout while executing ovsdb commands.
  Because of this, the test environment is not prepared and the test fails.

  Example issues:
  - 
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(OVS,VLANs,openflow-native):
 
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/testr_results.html.gz

  - 
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(OVS,Flat network,openflow-native):
  http://logs.openstack.org/79/545679/1/check/neutron-fullstack/3a12865/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750337] [NEW] Fullstack tests fail due to "block_until_boot" timeout

2018-02-19 Thread Slawek Kaplonski
Public bug reported:

Sometimes, in tests like
"neutron.tests.fullstack.test_connectivity.TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm(VLANs,openflow-native)"
there is a timeout error while waiting for all VMs to boot.
An example of such an error can be seen at
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/testr_results.html.gz

This example comes from a patch with some additional logging added to debug
the tests. What is strange is that the test environment makes the GET
/v2.0/ports/{port_id} call properly:
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/dsvm-fullstack-logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm_VLANs,openflow-native_.txt.gz#_2018-02-18_20_34_47_950
but this call is not logged in the neutron-server logs. The first GET call for
this port in the neutron-server logs appears about 1 minute 30 seconds later:
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/dsvm-fullstack-logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm_VLANs,openflow-native_/neutron-server--2018-02-18--20-31-43-830810.txt.gz#_2018-02-18_20_36_18_516
which is already too late, as the test has reached its timeout and failed.

The failed test run above is just an example. I have seen similar errors
more than once.
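
For context, the wait that times out here typically has the shape of the
hedged sketch below: poll the port through the REST API until it becomes
ACTIVE or the timeout expires. The helper name block_until_boot and the
client calls are illustrative assumptions, not the exact fullstack code.

```
# Hedged sketch, not the actual fullstack helper: poll a port until ACTIVE.
from neutron.common import utils as common_utils


def block_until_boot(client, port_id, timeout=120):
    def _port_is_active():
        # GET /v2.0/ports/{port_id}; if the request never reaches
        # neutron-server in time, this predicate never becomes true.
        port = client.show_port(port_id)['port']
        return port['status'] == 'ACTIVE'

    # raises after `timeout` seconds if the predicate stays false
    common_utils.wait_until_true(_port_is_active, timeout=timeout)
```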

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: fullstack gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750337

Title:
  Fullstack tests fail due to "block_until_boot" timeout

Status in neutron:
  Confirmed

Bug description:
  Sometimes, in tests like
"neutron.tests.fullstack.test_connectivity.TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm(VLANs,openflow-native)"
  there is a timeout error while waiting for all VMs to boot.
  An example of such an error can be seen at
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/testr_results.html.gz

  This example comes from a patch with some additional logging added to debug
  the tests. What is strange is that the test environment makes the GET
  /v2.0/ports/{port_id} call properly:
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/dsvm-fullstack-logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm_VLANs,openflow-native_.txt.gz#_2018-02-18_20_34_47_950
  but this call is not logged in the neutron-server logs. The first GET call
  for this port in the neutron-server logs appears about 1 minute 30 seconds
  later:
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/dsvm-fullstack-logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm_VLANs,openflow-native_/neutron-server--2018-02-18--20-31-43-830810.txt.gz#_2018-02-18_20_36_18_516
  which is already too late, as the test has reached its timeout and failed.

  The failed test run above is just an example. I have seen similar errors
  more than once.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750355] [NEW] nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails fails in 3.6 because py3 check is limited to 3.5

2018-02-19 Thread Chris Dent
Public bug reported:


nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
fails under Python 3.6 because the py3 version check is limited to 3.5. The failure is:

```
Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File "/Users/cdent/src/nova/nova/api/validation/validators.py", line 
300, in validate'
b'self.validator.validate(*args, **kwargs)'
b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/validators.py",
 line 129, in validate'
b'for error in self.iter_errors(*args, **kwargs):'
b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/validators.py",
 line 105, in iter_errors'
b'for error in errors:'
b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/_validators.py",
 line 14, in patternProperties'
b'if re.search(pattern, k):'
b'  File "/Users/cdent/src/nova/.tox/py36/lib/python3.6/re.py", line 182, 
in search'
b'return _compile(pattern, flags).search(string)'
b'TypeError: expected string or bytes-like object'
```
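
For reference, the TypeError itself is easy to reproduce outside of nova.
The snippet below is a minimal sketch assuming that, as the traceback
suggests, a non-string property name (here an integer) ends up being handed
to re.search by the patternProperties validator.

```
# Minimal reproduction sketch of the TypeError in the traceback above,
# assuming a non-string property name reaches re.search.
import re

try:
    re.search("^[a-zA-Z0-9]{1,10}$", 1)  # non-string subject
except TypeError as exc:
    print(exc)  # expected string or bytes-like object
```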

** Affects: nova
 Importance: Undecided
 Assignee: Chris Dent (cdent)
 Status: In Progress


** Tags: low-hanging-fruit testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750355

Title:
  
nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
  fails in 3.6 because py3 check is limited to 3.5

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  
  
nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
fails under Python 3.6 because the py3 version check is limited to 3.5. The failure is:

  ```
  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File "/Users/cdent/src/nova/nova/api/validation/validators.py", line 
300, in validate'
  b'self.validator.validate(*args, **kwargs)'
  b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/validators.py",
 line 129, in validate'
  b'for error in self.iter_errors(*args, **kwargs):'
  b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/validators.py",
 line 105, in iter_errors'
  b'for error in errors:'
  b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/_validators.py",
 line 14, in patternProperties'
  b'if re.search(pattern, k):'
  b'  File "/Users/cdent/src/nova/.tox/py36/lib/python3.6/re.py", line 182, 
in search'
  b'return _compile(pattern, flags).search(string)'
  b'TypeError: expected string or bytes-like object'
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750353] [NEW] _get_changed_synthetic_fields() does not guarantee returned fields to be updatable

2018-02-19 Thread Lujin Luo
Public bug reported:

While revising [1], I discovered an issue with
_get_changed_synthetic_fields(): it does not guarantee that the returned
fields are updatable.

How to reproduce:
 Set a breakpoint in [2] and then run 
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object,
 the returned fields are
-> return fields
(Pdb) fields
{'host': u'c2753a12ec', 'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}
where 'host' and 'port_id' are not updatable.

[1] https://review.openstack.org/#/c/544206/
[2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L696
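
A hedged sketch of the direction a fix could take is below: filter the
changed synthetic fields against the object's fields_no_update list. The
helper name and body are illustrative, not the current neutron code.

```
# Hedged sketch, not the actual neutron implementation: only report changed
# synthetic fields that the object allows to be updated (NeutronDbObject
# exposes both `synthetic_fields` and `fields_no_update`).
def get_changed_updatable_synthetic_fields(obj):
    changes = obj.obj_get_changes()
    return {name: value for name, value in changes.items()
            if name in obj.synthetic_fields
            and name not in obj.fields_no_update}
```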

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)

** Description changed:

  While revising [1], I discovered an issue of
  _get_changed_synthetic_fields(): it does not guarantee returned fields
  to be updatable.
  
- How to reproduce: 
-  Set a breakpoint in [2] and then run 
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object,
 the returned fields are 
+ How to reproduce:
+  Set a breakpoint in [2] and then run 
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object,
 the returned fields are
  -> return fields
  (Pdb) fields
  {'host': u'c2753a12ec', 'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}
- where port_id is not updatable.
+ where 'host' and 'port_id' are not updatable.
+ 
+ [1] https://review.openstack.org/#/c/544206/

** Description changed:

  While revising [1], I discovered an issue of
  _get_changed_synthetic_fields(): it does not guarantee returned fields
  to be updatable.
  
  How to reproduce:
   Set a breakpoint in [2] and then run 
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object,
 the returned fields are
  -> return fields
  (Pdb) fields
  {'host': u'c2753a12ec', 'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}
  where 'host' and 'port_id' are not updatable.
  
  [1] https://review.openstack.org/#/c/544206/
+ [2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L696

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750353

Title:
  _get_changed_synthetic_fields() does not guarantee returned fields to
  be updatable

Status in neutron:
  In Progress

Bug description:
  While revising [1], I discovered an issue with
  _get_changed_synthetic_fields(): it does not guarantee that the returned
  fields are updatable.

  How to reproduce:
   Set a breakpoint in [2] and then run 
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object,
 the returned fields are
  -> return fields
  (Pdb) fields
  {'host': u'c2753a12ec', 'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}
  where 'host' and 'port_id' are not updatable.

  [1] https://review.openstack.org/#/c/544206/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L696

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1741841] Re: VNC console fails to connect

2018-02-19 Thread Sylvain Bauza
Have you restarted the specific nova-consoleauth service?

To be clear, the VNC console is one way to quickly access the instance, but if 
you need more, there are other ways to connect to the guest:
https://docs.openstack.org/nova/pike/admin/remote-console-access.html

Setting this as Invalid since I don't really see a clear Nova issue, but
feel free to reopen the bug by marking it as "New" if you can explain in
more detail what specific issue you are seeing.

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741841

Title:
  VNC console fails to connect

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description:
  When trying to open an instance's VNC console, we get an error message (such 
as "Failed to connect to server (code: 1006)") after some time. For the initial 
5 minutes it works successfully. (Pike release)

  Workaround:
  After refreshing the browser it eventually works.

  Logs:
  nova-consoleauth.log
  INFO nova.consoleauth.manager [req-0db1a667-bba0-427d-ba0b-4f03ecf1bcca - - - 
- -] Checking Token: f2c6eac1-ad7a-4bbf-8952-7e7fd5c4ae75, False

  nova-novncproxy.log
  INFO nova.console.websocketproxy [req-0db1a667-bba0-427d-ba0b-4f03ecf1bcca - 
- - - -] handler exception: The token 'f2c6eac1-ad7a-4bbf-8952-7e7fd5c4ae75' is 
invalid or has expired

  Environment
  -3 compute nodes and 2 controller nodes 
  -nova.conf on compute node has,
   vnc_enabled = True
   novnc_enabled = True
   vnc_keymap = en-us
   vncserver_listen = 0.0.0.0
   vncserver_proxyclient_address = 10.20.0.137
   novncproxy_base_url = http://10.20.3.101:6080/vnc_auto.html

  -nova.conf on controller node has [cache] section
   [cache]
   enabled = true
   backend = oslo_cache.memcache_pool
   memcache_servers = 10.20.0.142:11211,10.20.0.164:11211,10.20.0.152:11211

  Steps to reproduce
  Log in to the dashboard as a project user, spawn a VM, and select "VNC 
console" from the drop-down. It is possible to connect to the VNC console for 
the initial 5-7 minutes; after that it is not. An empty VNC window appears but 
cannot connect to the VM, with the error "Failed to connect to server 
(code: 1006)".
  100% repeatable.

  Any guidance on resolving this would be really appreciated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632852] Re: placement api responses should not be cacheable

2018-02-19 Thread Chris Dent
This is actually done now, but my use of Partial-Bug on both changes
meant the automation did not happen.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632852

Title:
  placement api responses should not be cacheable

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In version 1.0 of the placement API, responses are sent without any
  cache-busting headers. This means that the responses may be cached by
  the user-agent. It's not predictable.

  Caching of resource providers is not desired so it would be good to
  send cache headers to enforce that responses are not cached.

  This old document remains the bizness for learning how to do such
  things: https://www.mnot.net/cache_docs/
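
  As a hedged illustration of the kind of cache-busting involved (not the
  exact change made to the placement handlers), a WSGI response built with
  webob can be marked non-cacheable like this; the helper name is an
  assumption for illustration only.

```
# Hedged sketch: marking a JSON response as non-cacheable with webob.
# Where exactly this happens in the placement API is not shown here.
import webob


def make_response(body):
    resp = webob.Response(body=body, content_type='application/json')
    resp.cache_control = 'no-cache'  # ask user-agents not to reuse the body
    return resp
```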

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750368] [NEW] Neutron-neutron interconnection

2018-02-19 Thread Thomas Morin
Public bug reported:

Today, to provide connectivity between two OpenStack clouds (e.g. two distinct
OpenStack deployments, or two OpenStack regions), some options are available,
such as floating IPs, VPNaaS (IPSec-based), and BGPVPNs.

However, none of these options are appropriate to address use cases where all
the following properties are desired:

* interconnection consumable on-demand, without admin intervention
  (possible with floating IPs, VPNaaS, but not with the BGP VPN
  interconnections API extension)

* have network isolation and allow the use of private IP addressing end-to-end
  (possible with VPNaaS, and BGP VPN interconnections, but not with
  floating IPs)

* avoid the overhead of packet encryption
  (possible with floating IPs, and BGP VPN interconnections, but by
  construction not with VPNaaS)

The goal of this RFE is to propose a solution to provide network connectivity
between two or more OpenStack deployments or regions, respecting these
constraints.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750368

Title:
  Neutron-neutron interconnection

Status in neutron:
  New

Bug description:
  Today, to provide connectivity between two OpenStack clouds (e.g. two distinct
  OpenStack deployments, or two OpenStack regions), some options are available,
  such as floating IPs, VPNaaS (IPSec-based), and BGPVPNs.

  However, none of these options are appropriate to address use cases where all
  the following properties are desired:

  * interconnection consumable on-demand, without admin intervention
(possible with floating IPs, VPNaaS, but not with the BGP VPN
interconnections API extension)

  * have network isolation and allow the use of private IP addressing end-to-end
(possible with VPNaaS, and BGP VPN interconnections, but not with
floating IPs)

  * avoid the overhead of packet encryption
(possible with floating IPs, and BGP VPN interconnections, but by
construction not with VPNaaS)

  The goal of this RFE is to propose a solution to provide network connectivity
  between two or more OpenStack deployments or regions, respecting these
  constraints.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747869] Re: ./stack.sh doesn't work with postgres

2018-02-19 Thread Brian Rosmaita
** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1747869

Title:
  ./stack.sh doesn't work with postgres

Status in Glance:
  Fix Released

Bug description:
  Probably after the recent change [1],
  ./stack.sh doesn't work with postgres anymore.

  [1] I2653560d637a6696f936b49e87f16326fd601dfe

  
  +lib/databases/postgresql:recreate_database_postgresql:46  createdb -h 
127.0.0.1 -Uroot 
  -l C -T template0 -E utf8 glance
  +lib/glance:init_glance:313time_start dbsync
  +functions-common:time_start:2237  local name=dbsync
  +functions-common:time_start:2238  local start_time=
  +functions-common:time_start:2239  [[ -n '' ]]
  ++functions-common:time_start:2242  date +%s%3N
  +functions-common:time_start:2242  _TIME_START[$name]=1517992535119
  +lib/glance:init_glance:315/usr/local/bin/glance-manage 
--config-file /e
  tc/glance/glance-api.conf db_sync
  
/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:1334: 
OsloDBDeprecationWarning: EngineFacade is deprecated; please use 
oslo_db.sqlalchemy.enginefacade
expire_on_commit=expire_on_commit, _conf=conf)
  WARNING oslo_db.sqlalchemy.engines [-] URL 
postgresql://root:***@127.0.0.1/glance?client_encoding=utf8 does not contain a 
'+drivername' portion, and will make use of a default driver.  A full 
dbname+drivername:// protocol is recommended.
  INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
  INFO alembic.runtime.migration [-] Will assume transactional DDL.
  Rolling upgrades are currently supported only for MySQL and Sqlite
  +lib/glance:init_glance:1  exit_trap
  +./stack.sh:exit_trap:510  local r=1
  ++./stack.sh:exit_trap:511  jobs -p
  +./stack.sh:exit_trap:511  jobs=
  +./stack.sh:exit_trap:514  [[ -n '' ]]
  +./stack.sh:exit_trap:520  '[' -f /tmp/tmp.yKvw8EwKuQ ']'
  +./stack.sh:exit_trap:521  rm /tmp/tmp.yKvw8EwKuQ
  +./stack.sh:exit_trap:525  kill_spinner
  +./stack.sh:kill_spinner:424   '[' '!' -z '' ']'
  +./stack.sh:exit_trap:527  [[ 1 -ne 0 ]]
  +./stack.sh:exit_trap:528  echo 'Error on exit'
  Error on exit
  +./stack.sh:exit_trap:530  type -p generate-subunit
  +./stack.sh:exit_trap:531  generate-subunit 1517992018 519 
fail
  +./stack.sh:exit_trap:533  [[ -z /opt/stack/logs ]]
  +./stack.sh:exit_trap:536  
/home/takashi/git/devstack/tools/worlddump.py -d /opt/stack/logs
  World dumping... see /opt/stack/logs/worlddump-2018-02-07-083537.txt for 
details
  +./stack.sh:exit_trap:545  exit 1
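
  The key line in the trace is "Rolling upgrades are currently supported only
  for MySQL and Sqlite": glance-manage bails out when the configured database
  backend is anything else, such as postgres. A hedged sketch of that kind of
  guard is below; the function name and exact wording are assumptions, not
  glance's actual code.

```
# Hedged sketch of a backend guard, not the actual glance-manage code:
# refuse the rolling-upgrade path unless the SQLAlchemy dialect is
# MySQL or SQLite.
from sqlalchemy import create_engine


def check_rolling_upgrade_backend(db_url):
    engine = create_engine(db_url)
    if engine.name not in ('mysql', 'sqlite'):
        raise SystemExit('Rolling upgrades are currently supported only '
                         'for MySQL and Sqlite')
```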

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1747869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750377] [NEW] "Error nfs glance mounting

2018-02-19 Thread Madhu CR
Public bug reported:

TASK [os_glance : Glance mount nfs] 
***
task path: /etc/ansible/roles/os_glance/tasks/glance_post_install.yml:82
Monday 19 February 2018  05:35:04 -0800 (0:00:00.987)   0:01:29.769 *** 
container_name: "infra1_glance_container-aa13ae46"
physical_host: "infra1"
Container confirmed
Using module file 
/opt/ansible-runtime/local/lib/python2.7/site-packages/ansible/modules/system/mount.py
container_name: "infra1_glance_container-aa13ae46"
physical_host: "infra1"
Container confirmed
<10.10.10.21> ESTABLISH SSH CONNECTION FOR USER: root
<10.10.10.21> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 
StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=root -o ConnectTimeout=5 -o 
UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 
ServerAliveInterval=64 -o ServerAliveCountMax=1024 -o Compression=no -o 
TCPKeepAlive=yes -o VerifyHostKeyDNS=no -o ForwardX11=no -o ForwardAgent=yes -T 
-o ControlPath=/root/.ansible/cp/793b0fde4a 10.10.10.21 'lxc-attach --clear-env 
--name infra1_glance_container-aa13ae46 -- su - root -c '"'"'/bin/sh -c 
'"'"'"'"'"'"'"'"'/usr/bin/python && sleep 0'"'"'"'"'"'"'"'"''"'"''
<10.10.10.21> (1, '\n{"msg": "Error mounting /var/lib/glance/images: mount.nfs: 
access denied by server while mounting 10.10.90.10:/images\\n", "failed": true, 
"invocation": {"module_args": {"src": "10.10.90.10:/images", "name": 
"/var/lib/glance/images", "dump": null, "boot": "yes", "fstab": null, "passno": 
null, "fstype": "nfs", "state": "mounted", "path": "/var/lib/glance/images", 
"opts": "_netdev,auto"}}}\n', 'mesg: ttyname failed: Inappropriate ioctl for 
device\n')
failed: [infra1_glance_container-aa13ae46] (item={u'local_path': 
u'/var/lib/glance/images', u'type': u'nfs', u'options': u'_netdev,auto', 
u'remote_path': u'/images', u'server': u'10.10.90.10'}) => {
"failed": true, 
"invocation": {
"module_args": {
"boot": "yes", 
"dump": null, 
"fstab": null, 
"fstype": "nfs", 
"name": "/var/lib/glance/images", 
"opts": "_netdev,auto", 
"passno": null, 
"path": "/var/lib/glance/images", 
"src": "10.10.90.10:/images", 
"state": "mounted"
}
}, 
"item": {
"local_path": "/var/lib/glance/images", 
"options": "_netdev,auto", 
"remote_path": "/images", 
"server": "10.10.90.10", 
"type": "nfs"
}, 
"msg": "Error mounting /var/lib/glance/images: mount.nfs: access denied by 
server while mounting 10.10.90.10:/images\n"

** Affects: glance
 Importance: Undecided
 Assignee: Madhu CR (madhu.cr1)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Madhu CR (madhu.cr1)

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1750377

Title:
  "Error nfs glance mounting

Status in Glance:
  In Progress

Bug description:
  TASK [os_glance : Glance mount nfs] 
***
  task path: /etc/ansible/roles/os_glance/tasks/glance_post_install.yml:82
  Monday 19 February 2018  05:35:04 -0800 (0:00:00.987)   0:01:29.769 
*** 
  container_name: "infra1_glance_container-aa13ae46"
  physical_host: "infra1"
  Container confirmed
  Using module file 
/opt/ansible-runtime/local/lib/python2.7/site-packages/ansible/modules/system/mount.py
  container_name: "infra1_glance_container-aa13ae46"
  physical_host: "infra1"
  Container confirmed
  <10.10.10.21> ESTABLISH SSH CONNECTION FOR USER: root
  <10.10.10.21> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 
StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=root -o ConnectTimeout=5 -o 
UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 
ServerAliveInterval=64 -o ServerAliveCountMax=1024 -o Compression=no -o 
TCPKeepAlive=yes -o VerifyHostKeyDNS=no -o ForwardX11=no -o ForwardAgent=yes -T 
-o ControlPath=/root/.ansible/cp/793b0fde4a 10.10.10.21 'lxc-attach --clear-env 
--name infra1_glance_container-aa13ae46 -- su - root -c '"'"'/bin/sh -c 
'"'"'"'"'"'"'"'"'/usr/bin/python && sleep 0'"'"'"'"'"'"'"'"''"'"''
  <10.10.10.21> (1, '\n{"msg": "Error mounting /var/lib/glance/images: 
mount.nfs: access denied by server while mounting 10.10.90.10:/images\\n", 
"failed": true, "invocation": {"module_args": {"src": "10.10.90.10:/images", 
"name": "/var/lib/glance/images", "dump": null, "boot": "yes", "fstab": n

[Yahoo-eng-team] [Bug 1741232] Re: Unable to retrieve instances after openstack upgrade from newton to pike

2018-02-19 Thread Sylvain Bauza
Something is wrong with your upgrade, as it claims that the 'services'
table disappeared.

Given it's a specific deployment concern, setting the bug as Invalid
since it doesn't look like a nova upstream project issue.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741232

Title:
  Unable to retrieve instances after openstack upgrade from newton to
  pike

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description:
  After the openstack upgrade from the newton to the pike release:
  1. Not able to list instances created before the upgrade (though the 
instances seem to be running).
  2. Not able to spawn new instances after the upgrade from newton to pike.

  Observation:

  1. Able to ping and log in to VMs which were created before the upgrade.

  Below is the output when running the commands:

  **nova list**
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-47b8e78a-13dd-4fe7-9495-c84bbf51eb1b)

  **openstack server list**
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-e6c51993-05b4-445e-b10f-4de8a7938b7f)

  Note: The openstack-dashboard web UI faces the same issue; it
  throws the error "Unable to retrieve instances".

  Steps to reproduce:
  1. Deploy openstack newton.
  2. Spawn some VMs.
  3. Upgrade openstack from newton to pike.
  4. List the VMs using the "nova list" command.

  Expected results:
  1. Should be able to list VMs spawned before the upgrade.
  2. Should be able to spawn new VMs.

  Obtained result:
  1. Unable to retrieve the instances.
  2. Not able to spawn VMs.

  Description of the environment:
  * 9 baremetal nodes:
-1st node has MAAS Deployed.
-MAAS will deploy OS on all other nodes
-Using juju openstack is deployed
  *Node roles
-3 compute node
-3 controller node
-2 network node
  * Details:
- Newton on Ubuntu 14.04
- Compute: KVM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750383] [NEW] Dynamic routing: Error logged during speaker removal

2018-02-19 Thread Dr. Jens Harbott
Public bug reported:

During normal operation, when a BGP speaker is deleted and, as part of that
operation, removed from an agent, an error like the following is logged:


Feb 19 10:25:05.054654 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: INFO bgpspeaker.peer [-] Connection to peer 
192.168.10.129 lost, reason: Connection to peer lost: [Errno 9] Bad file 
descriptor. Resetting retry connect loop: False
Feb 19 10:25:05.054912 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: DEBUG bgpspeaker.signals.base [-] SIGNAL: ('core', 
'adj', 'down') emitted with data: {'peer': 
}  {{(pid=30255) 
emit_signal 
/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/signals/base.py:11}}
Feb 19 10:25:05.055034 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: INFO 
neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] BGP Peer 
192.168.10.129 for remote_as=64522 went DOWN.
Feb 19 10:25:05.055144 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: DEBUG bgpspeaker.peer [-] Peer 192.168.10.129 BGP 
FSM went from Established to Idle {{(pid=30255) bgp_state 
/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/peer.py:237}}
Feb 19 10:25:05.04 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: ERROR bgpspeaker.base [-] Traceback (most recent 
call last):
Feb 19 10:25:05.055768 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/base.py", 
line 256, in start
Feb 19 10:25:05.055929 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self._run(*args, **kwargs)
Feb 19 10:25:05.056106 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py", 
line 275, in _run
Feb 19 10:25:05.056293 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self._recv_loop()
Feb 19 10:25:05.056519 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py", 
line 571, in _recv_loop
Feb 19 10:25:05.056719 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self.connection_lost(conn_lost_reason)
Feb 19 10:25:05.056911 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py", 
line 596, in connection_lost
Feb 19 10:25:05.057096 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self._peer.connection_lost(reason)
Feb 19 10:25:05.057282 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/peer.py", 
line 2328, in connection_lost
Feb 19 10:25:05.057463 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self._protocol.stop()
Feb 19 10:25:05.057659 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py", 
line 405, in stop
Feb 19 10:25:05.057835 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: Activity.stop(self)
Feb 19 10:25:05.058019 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/base.py", 
line 314, in stop
Feb 19 10:25:05.058186 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: raise ActivityException(desc='Cannot call stop 
when activity is '
Feb 19 10:25:05.058363 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: ActivityException: 100.1 - Cannot call stop when 
activity is not started or has been stopped already.
Feb 19 10:25:05.058734 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: : ActivityException: 100.1 - Cannot call stop when 
activity is not started or has been stopped already.
Feb 19 10:25:31.149666 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: DEBUG 
neutron_dynamic_routing.services.bgp.agent.bgp_dragent [None 
req-590735a1-3669-43e0-8feb-3afa445663d9 None None] Report state task started 
{{(pid=30255) _report_state 
/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py:682}}

This is confusing since the operation finishes successfully; the
expected result is that no error is seen in the logs.
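
A hedged sketch of the kind of guard that would avoid the traceback is shown
below; whether the right place for it is ryu or neutron-dynamic-routing is not
settled here, and the helper name is illustrative.

```
# Hedged sketch, not the actual fix: tolerate a speaker/protocol that has
# already been stopped while the peer is being torn down.
from ryu.services.protocols.bgp.base import ActivityException


def stop_protocol_quietly(protocol):
    try:
        protocol.stop()
    except ActivityException:
        # Already stopped (or never started) -- nothing left to do.
        pass
```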

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750383

Title:
  Dynamic routing: Error logged during speaker removal

Status in neutron:
  New

Bug description:
  During normal operation, when a BGP speaker is deleted and, as part of that
  operation, removed from an agent, an error like the following is logged:

  
  Feb 19 10:25:05.054654 ubunt

[Yahoo-eng-team] [Bug 1641250] Re: NG details view route should have different name

2018-02-19 Thread Graham Hayes
** Changed in: designate-dashboard
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1641250

Title:
  NG details view route should have different name

Status in Designate Dashboard:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum UI:
  Fix Released
Status in senlin-dashboard:
  Fix Released
Status in UI Cookiecutter:
  Fix Released
Status in Zun UI:
  Fix Released

Bug description:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/core.module.js#L56

  path includes the name "project" but detail views can also come from
  "admin" and "identity". Change the name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1641250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645418] Re: Image Name does not show up when new instance are launched using horizon dashboard

2018-02-19 Thread Khairul Aizat Kamarudzzaman
Hi, I'm facing the same problem. Please see the attached screenshot.

** Attachment added: "Image Name not shown in Horizon Dashboard"
   
https://bugs.launchpad.net/horizon/+bug/1645418/+attachment/5058224/+files/Image-Name-Not-Shown-in-Horizon.png

** Changed in: horizon
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645418

Title:
  Image Name does not show up when new instance are launched using
  horizon dashboard

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  After creating a new instance from the horizon dashboard with the boot
  source set to an image, the image name does not show up in the table.

  Logs show a missing value for the "image ref" attribute.

  NOTE: When the instance is launched using the CLI, the image name is shown
  in horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1749762] Re: admin docs: interoperable image import revision

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/545188
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=90815cc7fb1e14f68c5df23d8ae0db418ee14982
Submitter: Zuul
Branch:master

commit 90815cc7fb1e14f68c5df23d8ae0db418ee14982
Author: Brian Rosmaita 
Date:   Thu Feb 15 20:57:46 2018 -0500

Revise interoperable image import documentation

Updated to include the changes introduced in the Queens release.
Closes-bug: #1749762

Change-Id: I1df60db5826ff1e4491134f943d5eb0f29b0b072


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1749762

Title:
  admin docs: interoperable image import revision

Status in Glance:
  Fix Released

Bug description:
  The admin docs need a revision about interoperable image import:

  * remove mention of "experimental" API at the top

  * wouldn't hurt to mention that web-download is the replacement for
  the old v1 copy-from and remind that v1 is DEPRECATED and will be
  removed in Rocky

  * hit the TODO explaining the 2 different methods

  * make sure it's clear where the configuration for each goes (what in
  glance-api.conf, what in glance-image-import.conf)

  * make sure it's clear that glance-image-import.conf is an optional
  file and where the sample file can be found -- see Erno's comments on
  https://review.openstack.org/#/c/544596/3/doc/source/admin
  /interoperable-image-import.rst

  And any typos, formatting problems you notice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1749762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645418] Re: Image Name does not show up when new instance are launched using horizon dashboard

2018-02-19 Thread Khairul Aizat Kamarudzzaman
I've disabled default-create-volume in the horizon configuration and now
the image name is shown.

Sorry for the inconvenience.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645418

Title:
  Image Name does not show up when new instance are launched using
  horizon dashboard

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  After creating a new instance from the horizon dashboard with the boot
  source set to an image, the image name does not show up in the table.

  Logs show a missing value for the "image ref" attribute.

  NOTE: When the instance is launched using the CLI, the image name is shown
  in horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750415] [NEW] validation of app cred tokens is dependent on CONF.token.cache_on_issue

2018-02-19 Thread Lance Bragstad
Public bug reported:

Some information in tokens obtained with application credentials isn't
available unless caching is enabled. I was able to recreate this using
some of the tests in test_v3_trust.py and by setting
CONF.token.cache_on_issue to False, which resulted in a 500 because a
specific key in the token reference wasn't available [0].

Without digging in too deeply, I think this is because the token is
cached when it is created, meaning the process of rebuilding the entire
authorization context at validation time is short-circuited.

[0] http://paste.openstack.org/show/677666/

** Affects: keystone
 Importance: Critical
 Status: Triaged

** Changed in: keystone
   Importance: Undecided => Critical

** Changed in: keystone
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750415

Title:
  validation of app cred tokens is dependent on
  CONF.token.cache_on_issue

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  Some information in tokens obtained with application credentials isn't
  available unless caching is enabled. I was able to recreate this using
  some of the tests in test_v3_trust.py and by setting
  CONF.token.cache_on_issue to False, which resulted in a 500 because a
  specific key in the token reference wasn't available [0].

  Without digging in too deeply, I think this is because the token is
  cached when it is created, meaning the process of rebuilding the entire
  authorization context at validation time is short-circuited.

  [0] http://paste.openstack.org/show/677666/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1750415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750415] Re: validation of app cred tokens is dependent on CONF.token.cache_on_issue

2018-02-19 Thread Morgan Fainberg
Based on research and discussions in IRC, it turns out we do not store
the application_credential_id in the token payload. This means that if
the token is not pre-populated in the cache, the test will fail.

This also means that if the token cache expires, subsequent uses of the
token with the application credential will also fail or have inconsistent
or inappropriate behavior.

This requires a fix to add a payload formatter (likely more than one) that
includes the application credential. The issue can be identified by looking at
https://github.com/openstack/keystone/blob/c80df22669ae457f8a64ddef7d31f685f9ad1e01/keystone/token/token_formatters.py
and seeing that the application credential is not stored anywhere, while the
auth methods are properly populated.

** Also affects: keystone/rocky
   Importance: Critical
 Assignee: Lance Bragstad (lbragstad)
   Status: In Progress

** Also affects: keystone/queens
   Importance: Undecided
   Status: New

** Changed in: keystone/queens
   Importance: Undecided => Critical

** Changed in: keystone/queens
   Status: New => Triaged

** Changed in: keystone/queens
 Assignee: (unassigned) => Lance Bragstad (lbragstad)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750415

Title:
  validation of app cred tokens is dependent on
  CONF.token.cache_on_issue

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Identity (keystone) queens series:
  Triaged
Status in OpenStack Identity (keystone) rocky series:
  In Progress

Bug description:
  Some information in tokens obtained with application credentials isn't
  available unless caching is enabled. I was able to recreate this using
  some of the tests in test_v3_trust.py and by setting
  CONF.token.cache_on_issue to False, which resulted in a 500 because a
  specific key in the token reference wasn't available [0].

  Without digging into a bunch, I think this is because the token is
  cached when it is created, meaning the process to rebuild the entire
  authorization context at validation time is short-circuited.

  [0] http://paste.openstack.org/show/677666/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1750415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1731986] Re: nova snapshot_volume_backed failure does not thaw filesystems

2018-02-19 Thread Matt Riedemann
** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => Eric M Gonzalez (egrh3)

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1731986

Title:
  nova snapshot_volume_backed failure does not thaw filesystems

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  Noticed in OpenStack Mitaka (commit 9825c80), but the function
  (snapshot_volume_backed) is unchanged as of commit a4fc1bcd. backends:
  Libvirt + Ceph.

  When Nova attempts to create an image / snapshot of a volume-backed
  instance it first quiesces the instance in `snapshot_volume_backed()`.
  It then loops over all of the block devices associated with that
  instance. However, there is no exception handling in the for loop and
  any failures on the part of Cinder are bubbled up and through the
  `snapshot_volume_backed()` function. This causes the needed
  `unquiesce()` to never be called on the instance, leaving it in an
  inconsistent (read-only) state. This can cause operational errors in
  the instance leaving it unusable.

  In my case, the steps for reproduction are:

  1) nova create image / ( "create snapshot" via horizon )
  2) nova/compute/api snapshot_volume_backed() calls quiesce
  3) "qemu-ga: info: guest-fsfreeze called" is seen in instance
  4) cinder fails snapshot of volume due to OverLimit
  5) cinder raises OverLimit
  6) snapshot_volume_backed() never finishes due to OverLimit
  7) filesystem is never thawed
  8) instance unusable

  I am in the process of writing and testing a patch and will have a
  review for it soon.
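
  The general shape of such a patch, hedged and with illustrative names
  rather than the exact nova code, is to guarantee the unquiesce in a
  finally block around the snapshot loop:

```
# Hedged sketch, not the actual nova patch: make sure the guest filesystems
# are thawed even when a Cinder snapshot call raises (e.g. OverLimit).
def snapshot_volume_backed(compute_api, volume_api, context, instance, bdms):
    compute_api.quiesce(context, instance)          # freeze filesystems
    try:
        for bdm in bdms:
            # any cinderclient failure used to escape here and skip the thaw
            volume_api.create_snapshot_force(context, bdm.volume_id)
    finally:
        compute_api.unquiesce(context, instance)    # thaw filesystems
```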

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1731986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709931] Re: Windows: exec calls stdout trimmed

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/492107
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=82d468ba7fd6dce91fec015d39126e71c1434fb1
Submitter: Zuul
Branch:master

commit 82d468ba7fd6dce91fec015d39126e71c1434fb1
Author: Lucian Petrut 
Date:   Wed Aug 9 13:50:09 2017 +0300

Windows: fix exec calls

At some point, we've switched to an alternative process launcher
that uses named pipes to communicate with the child processes. This
implementation has some issues, truncating the process output in some
situations.

This change switches back to subprocess.Popen, which is a much easier
and convenient way to perform exec calls. We're also ensuring that the
os module is not patched (which would cause subprocess.Popen to fail
on Windows due to an eventlet limitation, the reason why the above
mentioned implementation was first introduced).

We're also ensuring that such calls do not block other greenthreads
by leveraging eventlet.tpool.

Side note: I had to store subprocess.Popen in a variable in order
to avoid having OpenStack bandit complaining, even though we're
explicitly passing "shell=False":
http://paste.openstack.org/raw/658319/

Closes-Bug: #1709931

Change-Id: Ib58e12030e69ea10862452c2f141a7a5f2527621


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709931

Title:
  Windows: exec calls stdout trimmed

Status in neutron:
  Fix Released

Bug description:
  At some point, we've switched to an alternative process launcher that
  uses named pipes to communicate with the child processes. This
  implementation has some issues, truncating the process output in some
  situations.

  Trace:
  http://paste.openstack.org/show/616053/
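
  As described in the commit message above, the fix moves to subprocess.Popen
  and offloads the blocking read to a native thread. A hedged sketch of that
  pattern, with an illustrative helper name rather than the exact neutron
  code, is:

```
# Hedged sketch, not the actual neutron code: run a child process with
# subprocess.Popen while keeping other eventlet greenthreads responsive by
# doing the blocking communicate() in a native thread via eventlet.tpool.
import subprocess

from eventlet import tpool


def run_cmd(cmd, input_data=None):
    proc = subprocess.Popen(cmd, shell=False,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    # communicate() blocks; offload it to a real OS thread
    out, err = tpool.execute(proc.communicate, input_data)
    return proc.returncode, out, err
```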

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1709931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1731986] Re: nova snapshot_volume_backed failure does not thaw filesystems

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/519464
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=bca425a33f52584051348a3ace832be8151299a7
Submitter: Zuul
Branch:master

commit bca425a33f52584051348a3ace832be8151299a7
Author: Eric M Gonzalez 
Date:   Mon Nov 13 14:02:27 2017 -0600

unquiesce instance on volume snapshot failure

This patch adds an exception catch to "snapshot_volume_backed()" of
compute/api.py that catches (at the moment) _all_ exceptions from the
underlying cinderclient. Previously, if the instance is quiesced ( frozen
filesystem ) then the exception will break execution of the function,
skipping the needed unquiesce, and leave the instance in a frozen state.

Now, the exception catch will unquiesce the instance if it was prior to
the failure.

Got a unit test in place with the help of Matt Riedemann.
test_snapshot_volume_backed_with_quiesce_create_snap_fails

Change-Id: I60de179c72eede6746696f29462ee9d805dace47
Closes-bug: #1731986


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1731986

Title:
  nova snapshot_volume_backed failure does not thaw filesystems

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  Noticed in OpenStack Mitaka (commit 9825c80), but the function
  (snapshot_volume_backed) is unchanged as of commit a4fc1bcd. backends:
  Libvirt + Ceph.

  When Nova attempts to create an image / snapshot of a volume-backed
  instance it first quiesces the instance in `snapshot_volume_backed()`.
  It then loops over all of the block devices associated with that
  instance. However, there is no exception handling in the for loop and
  any failures on the part of Cinder are bubbled up and through the
  `snapshot_volume_backed()` function. This causes the needed
  `unquiesce()` to never be called on the instance, leaving it in an
  inconsistent (read-only) state. This can cause operational errors in
  the instance leaving it unusable.

  In my case, the steps for reproduction are:

  1) nova create image / ( "create snapshot" via horizon )
  2) nova/compute/api snapshot_volume_backed() calls quiesce
  3) "qemu-ga: info: guest-fsfreeze called" is seen in instance
  4) cinder fails snapshot of volume due to OverLimit
  5) cinder raises OverLimit
  6) snapshot_volume_backed() never finishes due to OverLimit
  7) filesystem is never thawed
  8) instance unusable

  I am in the process of writing and testing a patch and will have a
  review for it soon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1731986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1745977] Re: When source compute service up, will not destroy and clean up those instances which be evacuated then be deleted.

2018-02-19 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1745977

Title:
  When source compute service up, will not destroy and clean up those
  instances which be evacuated then be deleted.

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  Description
  ===
  When an instance is evacuated to the destination host successfully and then
deleted, the source host fails to clean up the instance once it comes back up.

  Steps to reproduce
  ==
  1. Deploy a local instance on the source host.
  2. Power off the source host.
  3. Evacuate the instance to the destination host.
  4. Delete this instance.
  5. Power on the source host.

  Expected result
  ===
  The source host's nova-compute service cleans up the evacuated and deleted
instance.

  Actual result
  =
  The instance is still present on the source host.

  Environment
  ===
  Openstack Pike
  Libvirt + KVM
  ovs network

  
  Logs & Configs
  ==
  source host nova-compute log:

  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
[req-7bdfe28f-0464-4af8-bdd0-2d433b25d84a - - - - -] Error starting thread.: 
InstanceNotFound_Remote: Instance 19022200-7abc-423d-90bd-e9dcd0887679 could 
not be found.
  Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 
125, in _object_dispatch
  return getattr(target, method)(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  result = fn(cls, context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 474, 
in get_by_uuid
  use_slave=use_slave)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
235, in wrapper
  return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 466, 
in _db_instance_get_by_uuid
  columns_to_join=columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 744, in 
instance_get_by_uuid
  return IMPL.instance_get_by_uuid(context, uuid, columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
179, in wrapper
  return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
280, in wrapped
  return f(context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
1911, in instance_get_by_uuid
  columns_to_join=columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
1920, in _instance_get_by_uuid
  raise exception.InstanceNotFound(instance_id=uuid)

  InstanceNotFound: Instance 19022200-7abc-423d-90bd-e9dcd0887679 could not be 
found.
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service Traceback (most 
recent call last):
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 721, in 
run_service
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service service.start()
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 156, in start
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self.manager.init_host()
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1173, in 
init_host
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self._destroy_evacuated_instances(context)
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 691, in 
_destroy_evacuated_instances
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service bdi, 
destroy_disks)
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 909, in 
destroy
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service destroy_disks)
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1032, in 
cleanup
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service attempts = 
int(instance.system_metadata.get('clean_attempts',
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 67, in 
getter
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self.obj_load_attr(name)
  2018-01-29 10:28:48.664 93

[Yahoo-eng-team] [Bug 1745977] Re: When source compute service up, will not destroy and clean up those instances which be evacuated then be deleted.

2018-02-19 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1745977

Title:
  When source compute service up, will not destroy and clean up those
  instances which be evacuated then be deleted.

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  Description
  ===
  After an instance is evacuated to the destination host and then deleted, the
  source host fails to clean it up when its nova-compute service comes back up.

  Steps to reproduce
  ==
  1. Deploy a local instance on the source host.
  2. Power off the source host.
  3. Evacuate the instance to the destination host.
  4. Delete the instance.
  5. Power on the source host.

  Expected result
  ===
  The nova-compute service on the source host cleans up the evacuated and
  deleted instance.

  Actual result
  =
  The instance is still present on the source host.

  Environment
  ===
  Openstack Pike
  Libvirt + KVM
  ovs network

  
  Logs & Configs
  ==
  source host nova-compute log:

  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
[req-7bdfe28f-0464-4af8-bdd0-2d433b25d84a - - - - -] Error starting thread.: 
InstanceNotFound_Remote: Instance 19022200-7abc-423d-90bd-e9dcd0887679 could 
not be found.
  Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 
125, in _object_dispatch
  return getattr(target, method)(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  result = fn(cls, context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 474, 
in get_by_uuid
  use_slave=use_slave)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
235, in wrapper
  return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 466, 
in _db_instance_get_by_uuid
  columns_to_join=columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 744, in 
instance_get_by_uuid
  return IMPL.instance_get_by_uuid(context, uuid, columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
179, in wrapper
  return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
280, in wrapped
  return f(context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
1911, in instance_get_by_uuid
  columns_to_join=columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
1920, in _instance_get_by_uuid
  raise exception.InstanceNotFound(instance_id=uuid)

  InstanceNotFound: Instance 19022200-7abc-423d-90bd-e9dcd0887679 could not be 
found.
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service Traceback (most 
recent call last):
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 721, in 
run_service
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service service.start()
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 156, in start
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self.manager.init_host()
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1173, in 
init_host
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self._destroy_evacuated_instances(context)
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 691, in 
_destroy_evacuated_instances
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service bdi, 
destroy_disks)
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 909, in 
destroy
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service destroy_disks)
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/dri

[Yahoo-eng-team] [Bug 1750450] [NEW] ironic: n-cpu fails to recover after losing connection to ironic-api and placement-api

2018-02-19 Thread Jim Rollenhagen
Public bug reported:

The ironic virt driver does some crazy things when the ironic API goes
down - it returns [] from get_available_nodes(). When the resource
tracker sees this, it immediately attempts to delete all of the compute
node records and resource providers for said nodes.

If placement is also down at this time, the resource providers will not
be properly deleted.

When ironic-api and placement-api return, nova will see nodes, create
compute_node records for them, and try to create new resource providers
(as they are new compute_node records). This will fail with a name
conflict, and the nodes will be unusable.

This is easy to fix, by raising an exception in get_available_nodes,
instead of lying to the resource tracker and returning []. However, this
causes nova-compute to fail to start if ironic-api is not available.

This may be fine but should have a larger discussion. We've added these
hacks over the years for some reason; we should look at the bigger
picture and decide how we want to handle these cases.
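
A rough sketch of the direction suggested above (not the actual driver code;
_get_node_list is a hypothetical wrapper and VirtDriverNotReady is assumed to
be an available nova exception):

from nova import exception

def get_available_nodes(self, refresh=False):
    try:
        node_list = self._get_node_list()   # hypothetical wrapper around the ironic API
    except Exception:
        # Returning [] here makes the resource tracker delete every compute
        # node record and resource provider; raising instead lets the periodic
        # task retry once ironic-api and placement-api are reachable again.
        raise exception.VirtDriverNotReady()
    return [n.uuid for n in node_list]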

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750450

Title:
  ironic: n-cpu fails to recover after losing connection to ironic-api
  and placement-api

Status in OpenStack Compute (nova):
  New

Bug description:
  The ironic virt driver does some crazy things when the ironic API goes
  down - it returns [] from get_available_nodes(). When the resource
  tracker sees this, it immediately attempts to delete all of the
  compute node records and resource providers for said nodes.

  If placement is also down at this time, the resource providers will
  not be properly deleted.

  When ironic-api and placement-api return, nova will see nodes, create
  compute_node records for them, and try to create new resource
  providers (as they are new compute_node records). This will fail with
  a name conflict, and the nodes will be unusable.

  This is easy to fix, by raising an exception in get_available_nodes,
  instead of lying to the resource tracker and returning []. However,
  this causes nova-compute to fail to start if ironic-api is not
  available.

  This may be fine but should have a larger discussion. We've added
  these hacks over the years for some reason; we should look at the
  bigger picture and decide how we want to handle these cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1746209] Re: Ironic virt driver node cache may be missing required fields

2018-02-19 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/queens
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746209

Title:
  Ironic virt driver node cache may be missing required fields

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  Per the discussion in [1], the ironic nodes added to the node cache in
  the ironic virt driver may be missing the required field
  resource_class, as this field is not in _NODE_FIELDS. In practice,
  this is typically not an issue (possibly never), as the normal code
  path uses a detailed list to sync all ironic nodes, which contain all
  fields (including resource_class). However, some code paths use a
  single node query with the fields limited to _NODE_FIELDS, so this
  should be changed to include the required resource_class.

  There are a number of other minor related issues picked up in that
  discussion, which don't really deserve their own bugs:

  * Filter the node list in _refresh_cache using _NODE_FIELDS.
  * Improve unit tests to use representative filtered node objects.
  * Remove _parse_node_instance_info and associated tests.

  [1]
  https://review.openstack.org/#/c/532288/9/nova/virt/ironic/driver.py@79

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1745977] Re: When source compute service up, will not destroy and clean up those instances which be evacuated then be deleted.

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/543970
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6ba8a35825a7ec839b2d0aab7559351d573130ab
Submitter: Zuul
Branch:master

commit 6ba8a35825a7ec839b2d0aab7559351d573130ab
Author: Dan Smith 
Date:   Tue Feb 13 07:16:57 2018 -0800

Lazy-load instance attributes with read_deleted=yes

If we're doing a lazy-load of a generic attribute on instance, we
should be using read_deleted=yes. Otherwise we just fail in the load
process which is confusing and not helpful to a cleanup routine that
needs to handle the deleted instance. This makes us load those things
with read_deleted=yes.

Change-Id: Ide6cc5bb1fce2c9aea9fa3efdf940e8308cd9ed0
Closes-Bug: #1745977
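
In practice the approach looks roughly like the sketch below (helper names are
assumptions, not the exact nova code; it assumes nova.utils.temporary_mutation
is available):

from nova import utils

def obj_load_attr(self, attrname):   # method sketch on the Instance object
    # Lazy-load with read_deleted='yes' so that cleanup code (for example
    # _destroy_evacuated_instances) can still load attributes of an
    # already-deleted instance instead of raising InstanceNotFound.
    with utils.temporary_mutation(self._context, read_deleted='yes'):
        self._load_generic(attrname)   # hypothetical loader helper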


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1745977

Title:
  When source compute service up, will not destroy and clean up those
  instances which be evacuated then be deleted.

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  Description
  ===
  After an instance is evacuated to the destination host and then deleted, the
  source host fails to clean it up when its nova-compute service comes back up.

  Steps to reproduce
  ==
  1. Deploy a local instance on the source host.
  2. Power off the source host.
  3. Evacuate the instance to the destination host.
  4. Delete the instance.
  5. Power on the source host.

  Expected result
  ===
  The nova-compute service on the source host cleans up the evacuated and
  deleted instance.

  Actual result
  =
  The instance is still present on the source host.

  Environment
  ===
  Openstack Pike
  Libvirt + KVM
  ovs network

  
  Logs & Configs
  ==
  source host nova-compute log:

  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
[req-7bdfe28f-0464-4af8-bdd0-2d433b25d84a - - - - -] Error starting thread.: 
InstanceNotFound_Remote: Instance 19022200-7abc-423d-90bd-e9dcd0887679 could 
not be found.
  Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 
125, in _object_dispatch
  return getattr(target, method)(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  result = fn(cls, context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 474, 
in get_by_uuid
  use_slave=use_slave)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
235, in wrapper
  return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 466, 
in _db_instance_get_by_uuid
  columns_to_join=columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 744, in 
instance_get_by_uuid
  return IMPL.instance_get_by_uuid(context, uuid, columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
179, in wrapper
  return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
280, in wrapped
  return f(context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
1911, in instance_get_by_uuid
  columns_to_join=columns_to_join)

File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
1920, in _instance_get_by_uuid
  raise exception.InstanceNotFound(instance_id=uuid)

  InstanceNotFound: Instance 19022200-7abc-423d-90bd-e9dcd0887679 could not be 
found.
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service Traceback (most 
recent call last):
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 721, in 
run_service
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service service.start()
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 156, in start
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self.manager.init_host()
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1173, in 
init_host
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self._destroy_evacuated_instances(context)
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 691, in 
_destroy_evacuated_instances
  2018-01-29 10:28:48.664 9364 ERROR oslo_service.service bdi, 
destroy_disks)
  2018-01-2

[Yahoo-eng-team] [Bug 1750355] Re: nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails fails in 3.6 because py3 check is limited to 3.5

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/545798
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3017531c82f611b5572079753f7ea3c74c3dd71e
Submitter: Zuul
Branch:master

commit 3017531c82f611b5572079753f7ea3c74c3dd71e
Author: Chris Dent 
Date:   Mon Feb 19 11:06:45 2018 +

Fix PatternPropertiesTestCase for py 3.6

A python version check was only checking for 3.5. As noted
in the pre-existing comment, an exception message is not a
particularly stable interface so specific-to-minor-version
check is maintained.

Change-Id: I441b90e911fbac033e8cdea96114db22cba96ac5
Closes-Bug: #1750355
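
For reference, the kind of version gate involved looks like this (the detail
strings are placeholders, not the real jsonschema messages):

import sys

# jsonschema's error text differs between Python minor versions, so the test
# has to pick its expected detail per version instead of keying only on 3.5.
if sys.version_info >= (3, 6):
    expected_detail = "placeholder: message produced under Python 3.6"
elif sys.version_info >= (3, 5):
    expected_detail = "placeholder: message produced under Python 3.5"
else:
    expected_detail = "placeholder: message produced under Python 2.7"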


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750355

Title:
  nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
  fails in 3.6 because py3 check is limited to 3.5

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
  fails in 3.6 because py3 check is limited to 3.5 with:

  ```
  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File "/Users/cdent/src/nova/nova/api/validation/validators.py", line 
300, in validate'
  b'self.validator.validate(*args, **kwargs)'
  b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/validators.py",
 line 129, in validate'
  b'for error in self.iter_errors(*args, **kwargs):'
  b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/validators.py",
 line 105, in iter_errors'
  b'for error in errors:'
  b'  File 
"/Users/cdent/src/nova/.tox/py36/lib/python3.6/site-packages/jsonschema/_validators.py",
 line 14, in patternProperties'
  b'if re.search(pattern, k):'
  b'  File "/Users/cdent/src/nova/.tox/py36/lib/python3.6/re.py", line 182, 
in search'
  b'return _compile(pattern, flags).search(string)'
  b'TypeError: expected string or bytes-like object'
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1746209] Re: Ironic virt driver node cache may be missing required fields

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/539506
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5895566a428be4c30c31ae94070282566a6cc568
Submitter: Zuul
Branch:master

commit 5895566a428be4c30c31ae94070282566a6cc568
Author: Mark Goddard 
Date:   Wed Jan 31 11:11:32 2018 +

Add resource_class to fields in ironic node cache

Per the discussion in [1], the ironic nodes added to the node cache in
the ironic virt driver may be missing the required field resource_class,
as this field is not in _NODE_FIELDS. In practice, this is typically
not an issue (possibly never), as the normal code path uses a
detailed list to sync all ironic nodes, which contain all fields
(including resource_class). However, some code paths use a single
node query with the fields limited to _NODE_FIELDS, so could result in a
node in the cache without a resource_class.

This change adds resource_class to _NODE_FIELDS.

[1]
https://review.openstack.org/#/c/532288/9/nova/virt/ironic/driver.py@79

Change-Id: Id84b4a47d05532d341a9b6ca2de7e9e66e1930da
Closes-Bug: #1746209
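
The change itself is essentially a one-line tuple update; a sketch is below
(the exact contents of _NODE_FIELDS in nova/virt/ironic/driver.py may differ
from this list):

_NODE_FIELDS = ('uuid', 'power_state', 'target_power_state', 'provision_state',
                'target_provision_state', 'last_error', 'maintenance',
                'properties', 'instance_uuid', 'resource_class')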


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746209

Title:
  Ironic virt driver node cache may be missing required fields

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  Per the discussion in [1], the ironic nodes added to the node cache in
  the ironic virt driver may be missing the required field
  resource_class, as this field is not in _NODE_FIELDS. In practice,
  this is typically not an issue (possibly never), as the normal code
  path uses a detailed list to sync all ironic nodes, which contain all
  fields (including resource_class). However, some code paths use a
  single node query with the fields limited to _NODE_FIELDS, so this
  should be changed to include the required resource_class.

  There are a number of other minor related issues picked up in that
  discussion, which don't really deserve their own bugs:

  * Filter the node list in _refresh_cache using _NODE_FIELDS.
  * Improve unit tests to use representative filtered node objects.
  * Remove _parse_node_instance_info and associated tests.

  [1]
  https://review.openstack.org/#/c/532288/9/nova/virt/ironic/driver.py@79

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1745360] Re: DB Upgrade: Validation missing to check if E/M already executed

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/540736
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=5a5762b71c978312d2b00c1cbc6b382fe35ee30e
Submitter: Zuul
Branch:master

commit 5a5762b71c978312d2b00c1cbc6b382fe35ee30e
Author: shilpa.devharakar 
Date:   Fri Feb 9 11:58:07 2018 +0530

Add validation to check if E-M-C is already in sync

If you run expand and migrate commands for the second time,
it should return a user friendly message instead of attempting
to upgrade db again.

Added a check to confirm if expand and migrate are already in
sync and return a user friendly message.

Closes-Bug: #1745360
Change-Id: Iaf2e8ae2004db03f9b7498a2c498360fec096066
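
A rough sketch of the guard described above (all helper names are made up; the
real glance-manage code differs):

def expand_if_needed(engine):
    current = get_alembic_current_revision(engine)    # hypothetical helper
    latest_expand = get_latest_expand_revision()      # hypothetical helper
    if current == latest_expand:
        # The database is already expanded; print a friendly message instead
        # of re-running the ALTER TABLE statements and hitting a DBError.
        print('Database is already expanded to %s, nothing to do.' % current)
        return
    run_alembic_upgrade(engine, latest_expand)        # hypothetical helper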


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1745360

Title:
  DB Upgrade: Validation missing to check if E/M already executed

Status in Glance:
  Fix Released

Bug description:
  Description
  ===
  There is no validation present to check whether the database has already been
  expanded/migrated or contracted.
  A validation check is needed so that the system does not re-run the
  expand/migrate or contract scripts if they have already been executed.
  Re-running the same scripts causes internal errors, so reprocessing of the
  expand/migrate or contract scripts should be prevented.

  Steps to reproduce
  ==
  Let’s say expand has already been run and the column has already been altered;
  re-altering the same column will throw an internal error.
  If you try to run the 'glance-manage db_sync expand' command while upgrading
  from ocata to pike, it fails with the error below:
   INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, 
add visibility to images
   DBError: (pymysql.err.InternalError) (1060, u"Duplicate column name 
'visibility'") [SQL: u"ALTER TABLE images ADD COLUMN visibility 
ENUM('private','public','shared','community')"]
  Here ocata_expand01 has already been processed; the system tries to re-run it,
  which results in the error.

  Expected result
  ===
  If expand/migrate has already been executed, an appropriate message should be
  delivered to the operator.

  Actual result
  =
  Verified on a blank DB with the queens empty scripts; the results are below:

  EXPAND >> If we run glance-manage db expand, expand executes with the message
  below:
   Upgraded database to: queens_expand01, current revision(s): queens_expand01
  If we run glance-manage db expand again, expand executes again with the same
  message:
   Upgraded database to: queens_expand01, current revision(s): queens_expand01

  MIGRATE >> If we then proceed with glance-manage db migrate, it executes with
  the message below:
   Migrated 0 rows  Since no pending migrations.
  If we run glance-manage db migrate again, migrate executes again with the same
  message:
   Migrated 0 rows  Since no pending migrations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1745360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728665] Re: Removing gateway ip for tenant network (DVR) causes traceback in neutron-openvswitch-agent

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/521199
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9be7b62f773d3f61da57c151bfbd5c8fe4d4e863
Submitter: Zuul
Branch:master

commit 9be7b62f773d3f61da57c151bfbd5c8fe4d4e863
Author: Brian Haley 
Date:   Fri Nov 17 16:53:41 2017 -0500

DVR: verify subnet has gateway_ip before installing IPv4 flow

If a user clears the gateway_ip of a subnet and the OVS
agent is re-started, it will throw an exception trying
to install the DVR IPv4 flow.  Do not install the flow
in this case since it is not required.

Change-Id: I79aba63498aa9af1156e37530627fcaec853a740
Closes-bug: #1728665
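
A minimal sketch of the guard (names are hypothetical, not the real OVS DVR
agent API):

def bind_dvr_subnet(agent, subnet_info, local_vlan):
    gateway_ip = subnet_info.get('gateway_ip')
    if not gateway_ip:
        # The user cleared the subnet's gateway, so there is nothing for the
        # IPv4 DVR flow to match on and it is simply skipped.
        return
    agent.install_ipv4_dvr_flow(gateway_ip, local_vlan)   # hypothetical helper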


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1728665

Title:
  Removing gateway ip for tenant network (DVR) causes traceback in
  neutron-openvswitch-agent

Status in neutron:
  Fix Released

Bug description:
  Version: OpenStack Newton (OSA v14.2.11)
  neutron-openvswitch-agent version 9.4.2.dev21

  Issue:

  Users complained that instances were unable to procure their IP via
  DHCP. On the controllers, numerous ports were found in BUILD state.
  Tracebacks similar to the following could be observed in the neutron-
  openvswitch-agent logs across the (3) controllers.

  2017-10-26 16:24:28.458 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 
e9c11103-9d10-4b27-b739-e428773d8fac updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'e57257d9-f915-4c60-ac30-76b0e2d36378', u'segmentation_id': 2123, 
u'device_owner': u'network:dhcp', u'physical_network': u'physnet1', 
u'mac_address': u'fa:16:3e:af:aa:f5', u'device': 
u'e9c11103-9d10-4b27-b739-e428773d8fac', u'port_security_enabled': False, 
u'port_id': u'e9c11103-9d10-4b27-b739-e428773d8fac', u'fixed_ips': 
[{u'subnet_id': u'b7196c99-0df6-4b0e-bbfa-e62da96dac86', u'ip_address': 
u'10.1.1.32'}], u'network_type': u'vlan'}
  2017-10-26 16:24:28.458 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 48 as local vlan 
for net-id=e57257d9-f915-4c60-ac30-76b0e2d36378
  2017-10-26 16:24:28.462 4403 INFO neutron.agent.l2.extensions.qos 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no 
information about the port e9c11103-9d10-4b27-b739-e428773d8fac that we were 
trying to reset
  2017-10-26 16:24:28.462 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 
610c3924-5e94-4f95-b19b-75e43c5729ff updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'f09a8be9-a7c7-4f90-8cb3-d08b61095c25', u'segmentation_id': 5, 
u'device_owner': u'network:router_gateway', u'physical_network': u'physnet1', 
u'mac_address': u'fa:16:3e:bf:39:43', u'device': 
u'610c3924-5e94-4f95-b19b-75e43c5729ff', u'port_security_enabled': False, 
u'port_id': u'610c3924-5e94-4f95-b19b-75e43c5729ff', u'fixed_ips': 
[{u'subnet_id': u'3ce21ed4-bb6a-4e67-b222-a055df40af08', u'ip_address': 
u'96.116.48.132'}], u'network_type': u'vlan'}
  2017-10-26 16:24:28.463 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 43 as local vlan 
for net-id=f09a8be9-a7c7-4f90-8cb3-d08b61095c25
  2017-10-26 16:24:28.466 4403 INFO neutron.agent.l2.extensions.qos 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no 
information about the port 610c3924-5e94-4f95-b19b-75e43c5729ff that we were 
trying to reset
  2017-10-26 16:24:28.467 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 
66db7e2d-bd92-48ea-85fa-5e20dfc5311c updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'fd67eae2-9db7-4f7c-a622-39be67090cb4', u'segmentation_id': 2170, 
u'device_owner': u'network:dhcp', u'physical_network': u'physnet1', 
u'mac_address': u'fa:16:3e:c9:24:8a', u'device': 
u'66db7e2d-bd92-48ea-85fa-5e20dfc5311c', u'port_security_enabled': False, 
u'port_id': u'66db7e2d-bd92-48ea-85fa-5e20dfc5311c', u'fixed_ips': 
[{u'subnet_id': u'47366a54-22ca-47a2-b7a0-987257fa83ea', u'ip_address': 
u'192.168.189.3'}], u'network_type': u'vlan'}
  2017-10-26 16:24:28.467 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 54 as loca

[Yahoo-eng-team] [Bug 1722367] Re: Documentation for dns integration needs improvement

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/541712
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f305559292e6ecfd35740268e69b10cf99089fb2
Submitter: Zuul
Branch:master

commit f305559292e6ecfd35740268e69b10cf99089fb2
Author: Jens Harbott 
Date:   Wed Feb 7 12:56:12 2018 +

Update documentation for DNS integration

- Split documentation for external DNS integration into a new document
- Update configs to current standards
- Remove use of old designate client

Change-Id: I7a50ad72e35e2c01f874b872ddeff1aa8bfe3424
Closes-Bug: 1722367
Related-Bug: 1725630


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1722367

Title:
  Documentation for dns integration needs improvement

Status in neutron:
  Fix Released

Bug description:
  The documentation of neutron DNS integration needs some improvement in
  order to avoid common pitfalls during deployment.

  [ORIGINAL DESCRIPTION]
  The problem:

  Upon instance/port deletion the following error is received and the
  instance/port goes into ERROR state. The instance/port is deleted
  successfully after a second retry:

  
  2017-10-09 12:46:52.555 39624 ERROR neutron.callbacks.manager 
[req-70d6ae09-694a-4ba7-8189-f99159e71fc0 bc39ed40eefa4bd39e91ef35c5e48772 
9e1b0975ef23425d9f519ff1b97cdef1 - - -] Callback 
neutron.plugins.ml2.extensions.dns_integration._delete_port_in_external_dns_service--9223363296916797971
 raised Expecting to find domain in project. The server could not comply with 
the request since it is either malformed or otherwise incorrect. The client is 
assumed to be in error. (HTTP 400) (Request-ID: 
req-f5476d34-df91-41e8-be95-b481dc6d68f0)
  2017-10-09 12:46:52.605 39617 INFO neutron.wsgi 
[req-4deac1db-6401-43ff-a7c9-ef7e26b3a24d 2cccfff294fc42a397be3c5202401037 
5cc5d6cd841d4662b809cb883f4a0a8a - - -] 10.255.3.3 - - [09/Oct/2017 12:46:52] 
"GET 
/v2.0/ports.json?network_id=7e666b30-14d6-492c-893b-85cffa6a8e9f&device_owner=network%3Adhcp
 HTTP/1.1" 200 2437 0.071344
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource 
[req-70d6ae09-694a-4ba7-8189-f99159e71fc0 bc39ed40eefa4bd39e91ef35c5e48772 
9e1b0975ef23425d9f519ff1b97cdef1 - - -] delete failed: No details.
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/neutron/api/v2/resource.py",
 line 93, in resource
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/neutron/api/v2/base.py",
 line 562, in delete
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/neutron/db/api.py",
 line 95, in wrapped
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource 
self.force_reraise()
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/neutron/db/api.py",
 line 91, in wrapped
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/oslo_db/api.py", 
line 151, in wrapper
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource 
self.force_reraise()
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-15.1.9/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  2017-10-09 12:46:52.611 39624 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, sel

[Yahoo-eng-team] [Bug 1730845] Re: [RFE] support a port-behind-port API

2018-02-19 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730845

Title:
  [RFE] support a port-behind-port API

Status in neutron:
  Expired

Bug description:
  This RFE requests a unified API for a port-behind-port behaviour. This
  behaviour has a few use-cases:

  * MACVLAN - Identify that a port is behind a port using Allowed
  Address Pairs, and identifying the behaviour based on MAC.

  * HA Proxy behind Amphora - Identify that a port is behind a port
  using Allowed Address Pairs and identifying the behaviour based on IP.

  * Trunk Port (VLAN aware VMs) - Identify that a port is behind a port
  using the Trunk Port API and identifying the behaviour based on VLAN
  tags.

  This RFE proposes to extend the Trunk Port API to support the first
  two use-cases. The rationale is that in an SDN environment, it makes
  more sense to explicitly state the intent, rather than have the
  implementation infer the intent by matching Allowed Address Pairs and
  other existing ports.

  This will allow implementations to handle these use cases in a
  simpler, flexible, and more robust manner than done today.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750415] Re: validation of app cred tokens is dependent on CONF.token.cache_on_issue

2018-02-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/545971
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=796198f19670e3eb899ca3b1db5d2a21a4127a30
Submitter: Zuul
Branch:master

commit 796198f19670e3eb899ca3b1db5d2a21a4127a30
Author: Lance Bragstad 
Date:   Mon Feb 19 18:23:25 2018 +

Populate application credential data in token

Without this patch, the token formatter does not have enough data to
construct a token created with an application credential. This means
that if the token cache is disabled or expired, when keystone goes to
create the token it will not find any application credential information
and will not recreate the application_credential_restricted parameter in
the token data. This patch creates a new Payload class for application
credentials so that the application credential ID is properly persisted
in the msgpack'd payload. It also adds more data to the token data
object so that the application credential ID and name as well as its
restricted status is available when the token is queried.

Co-authored-by: Lance Bragstad 

Change-Id: I322a40404d8287748fe8c3a8d6dc1256d935d84a
Closes-bug: #1750415


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750415

Title:
  validation of app cred tokens is dependent on
  CONF.token.cache_on_issue

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) queens series:
  Triaged
Status in OpenStack Identity (keystone) rocky series:
  Fix Released

Bug description:
  Some information in tokens obtained with application credentials isn't
  available unless caching is enabled. I was able to recreate this using
  some of the tests in test_v3_trust.py and by setting
  CONF.token.cache_on_issue to False, which resulted in a 500 because a
  specific key in the token reference wasn't available [0].

  Without digging into a bunch, I think this is because the token is
  cached when it is created, meaning the process to rebuild the entire
  authorization context at validation time is short-circuited.

  [0] http://paste.openstack.org/show/677666/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1750415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2018-02-19 Thread Nguyen Hai
** Changed in: tacker
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Astara:
  Fix Released
Status in Bandit:
  Fix Released
Status in Barbican:
  Fix Released
Status in Blazar:
  Triaged
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-infoblox:
  In Progress
Status in networking-l2gw:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in quark:
  In Progress
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  Fix Released
Status in PBR:
  Fix Released
Status in pycadf:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in Glance Client:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Rally:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in sqlalchemy-migrate:
  In Progress
Status in SWIFT:
  In Progress
Status in tacker:
  Fix Released
Status in tempest:
  Invalid
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
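
  For illustration, a self-contained example of the correct ordering (plain
  unittest, not project code):

  import unittest

  class ArgumentOrderExample(unittest.TestCase):
      def test_argument_order(self):
          observed = sorted([3, 1, 2])   # value produced by the code under test
          # assertEqual(expected, observed): the expected value goes first, so
          # a failure report labels each side correctly instead of swapping
          # "expected" and "actual".
          self.assertEqual([1, 2, 3], observed)

  if __name__ == '__main__':
      unittest.main()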

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517839] Re: Make CONF.set_override with parameter enforce_type=True by default

2018-02-19 Thread Nguyen Hai
** Changed in: tacker
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517839

Title:
  Make CONF.set_override with parameter enforce_type=True by default

Status in Cinder:
  In Progress
Status in cloudkitty:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  In Progress
Status in Glance:
  Invalid
Status in OpenStack Heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kolla:
  Confirmed
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  New
Status in oslo.config:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Quark: Money Reinvented:
  New
Status in Rally:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  1. Problems:
     oslo_config provides the method CONF.set_override[1]; developers usually
     use it to change a config option's value in tests, which is convenient.
     By default the parameter enforce_type=False, so it does not check the type
     or value of the override. If enforce_type=True is set, it checks the
     override's type and value. In production (runtime) code, oslo_config
     always checks a config option's value.
     In short, we test and run code in different ways, so there is a gap: a
     config option with a wrong type or invalid value can pass tests when
     enforce_type=False in consuming projects. That means some invalid or
     wrong tests are in our code base.

     [1]
  https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L2173

  2. Proposal
     1) Fix violations when enforce_type=True in each project.

     2) Make the method CONF.set_override use enforce_type=True by default
        in oslo_config.

     You can find more details and comments in
     https://etherpad.openstack.org/p/enforce_type_true_by_default

  3. How to find violations in your projects.

     1. Run tox -e py27.

     2. Then modify oslo.config to use enforce_type=True:
        cd .tox/py27/lib64/python2.7/site-packages/oslo_config
        edit cfg.py so that enforce_type defaults to True:

  -def set_override(self, name, override, group=None, enforce_type=False):
  +def set_override(self, name, override, group=None, enforce_type=True):

     3. Run tox -e py27 again; you will find the violations.

  
  The current state is that oslo.config makes enforce_type True by default and
  deprecates this parameter; it will be removed in the future. The current work
  is to remove the usage of enforce_type in consuming projects. We can list its
  usage at
  http://codesearch.openstack.org/?q=enforce_type&i=nope&files=&repos=
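
  To illustrate the gap with a small example (this assumes an oslo.config
  release that still accepts the now-deprecated enforce_type parameter; the
  option is made up):

  from oslo_config import cfg

  CONF = cfg.CONF
  CONF.register_opt(cfg.IntOpt('workers', default=1))

  # Old default (enforce_type=False): the bad value is stored silently, so the
  # test runs with a string where production code requires an int.
  CONF.set_override('workers', 'ten', enforce_type=False)

  # enforce_type=True: the override is validated against IntOpt, so the broken
  # test fails immediately instead of passing with an invalid value.
  try:
      CONF.set_override('workers', 'ten', enforce_type=True)
  except ValueError:
      pass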

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1517839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1670464] Re: Downloading image with --progress fails with "RequestIdProxy object is not an iterator"

2018-02-19 Thread Abhishek Kekane
** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1670464

Title:
  Downloading image with --progress fails with "RequestIdProxy object is
  not an iterator"

Status in Glance:
  Fix Released

Bug description:
  I'm seeing this on a recent devstack.  Without --progress it seems to
  work fine.
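
  For background, a generic illustration of the error class involved (plain
  Python, not the actual glanceclient code): an object can be iterable without
  being an iterator, and calling next() on it directly fails with exactly this
  message.

  class Proxy(object):
      """Forwards iteration to a wrapped response body (illustrative only)."""
      def __init__(self, wrapped):
          self._wrapped = wrapped

      def __iter__(self):
          return iter(self._wrapped)

  body = Proxy([b'chunk1', b'chunk2'])
  for chunk in body:    # fine: the for loop calls iter(body) itself
      pass
  it = iter(body)       # code that needs next() must obtain an iterator first
  next(it)              # works; next(body) would raise
                        # TypeError: 'Proxy' object is not an iterator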

  [bnemec@Arisu ~]$ glance -d image-download 
2974158b-383d-4fe6-9671-5248b9a5d07d --file bmc-base.qcow2 --progress
  DEBUG:keystoneauth.session:REQ: curl -g -i -X GET http://11.1.1.78:5000/v3 -H 
"Accept: application/json" -H "User-Agent: glance keystoneauth1/2.18.0 
python-requests/2.12.5 CPython/2.7.13"
  DEBUG:requests.packages.urllib3.connectionpool:Starting new HTTP connection 
(1): 11.1.1.78
  DEBUG:requests.packages.urllib3.connectionpool:http://11.1.1.78:5000 "GET /v3 
HTTP/1.1" 200 252
  DEBUG:keystoneauth.session:RESP: [200] Date: Mon, 06 Mar 2017 18:37:02 GMT 
Server: Apache/2.4.25 (Fedora) OpenSSL/1.0.2k-fips mod_wsgi/4.4.23 
Python/2.7.13 Content-Length: 252 Vary: X-Auth-Token x-openstack-request-id: 
req-1fdae4f6-857c-4032-a4fb-1ecd08d1e90d Keep-Alive: timeout=5, max=100 
Connection: Keep-Alive Content-Type: application/json 
  RESP BODY: {"version": {"status": "stable", "updated": 
"2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}], "id": "v3.8", "links": 
[{"href": "http://11.1.1.78/identity/v3/";, "rel": "self"}]}}

  DEBUG:keystoneauth.session:GET call to None for http://11.1.1.78:5000/v3 used 
request id req-1fdae4f6-857c-4032-a4fb-1ecd08d1e90d
  DEBUG:keystoneauth.identity.v3.base:Making authentication request to 
http://11.1.1.78/identity/v3/auth/tokens
  DEBUG:requests.packages.urllib3.connectionpool:Starting new HTTP connection 
(1): 11.1.1.78
  DEBUG:requests.packages.urllib3.connectionpool:http://11.1.1.78:80 "POST 
/identity/v3/auth/tokens HTTP/1.1" 201 3438
  DEBUG:keystoneauth.identity.v3.base:{"token": {"is_domain": false, "methods": 
["password"], "roles": [{"id": "fcd99b843dfb4deca4f0fd5096360c22", "name": 
"admin"}], "expires_at": "2017-06-30T12:23:41.00Z", "project": {"domain": 
{"id": "default", "name": "Default"}, "id": "45c03ec02978498db7a12de812cc4b18", 
"name": "admin"}, "catalog": [{"endpoints": [{"url": 
"http://11.1.1.78:8774/v2/45c03ec02978498db7a12de812cc4b18";, "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"5c473dd798dd4206b529cfce030bfef2"}], "type": "compute_legacy", "id": 
"18a8bab02dd74eebb1b6308b15bf461a", "name": "nova_legacy"}, {"endpoints": 
[{"url": "http://11.1.1.78:8004/v1/45c03ec02978498db7a12de812cc4b18";, 
"interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"631a5450741443d4a94128aaa159b5e9"}, {"url": 
"http://11.1.1.78:8004/v1/45c03ec02978498db7a12de812cc4b18";, "interface": 
"admin", "region": "RegionOne", "region_id": "RegionOne", "id": 
"a83e3e461d0e442fa8e2464f06dd535f"}, {"url": 
"http://11.1.1.78:8004/v1/45c03ec02978498db7a12de812cc4b18";, "interface": 
"internal", "region": "RegionOne", "region_id": "RegionOne", "id": 
"d3a005ac8848461896d85ea0ed8323a6"}], "type": "orchestration", "id": 
"207ce855fbe7474db0a2b7d5fc0e9c9f", "name": "heat"}, {"endpoints": [{"url": 
"http://11.1.1.78:8000/v1";, "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "0c2a56c5c6c648ada5698536e1d6b850"}, {"url": 
"http://11.1.1.78:8000/v1";, "interface": "internal", "region": "RegionOne", 
"region_id": "RegionOne", "id": "12e59772754e41149f3be25805ea11c9"}, {"url": 
"http://11.1.1.78:8000/v1";, "interface": "admin", "region": "RegionOne", 
"region_id": "RegionOne", "id": "939d5746405c4571ace5cfc5d1fe5bdc"}], "type": 
"cloudformation", "id": "34ccb229f8e14a05bfe668094c927126", "name": 
"heat-cfn"}, {"endpoints": [{"url": "http://11.1.1.78/identity_admin";, 
"interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": 
"d924181ce443443aa425bb740249f09a"}, {"url": "http://11.1.1.78/identity";, 
"interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"e38adbef3cb14649b775646d5f383942"}], "type": "identity", "id": 
"54ab5a716ae0465286a6da5ff78c5b0b", "name": "keystone"}, {"endpoints": [{"url": 
"http://11.1.1.78:8774/v2.1";, "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "5b12864e38ec4d8b88955e50e9ff9839"}], "type": 
"compute", "id": "78e758f7be8d44529a56e05a53a1355c", "name": "nova"}, 
{"endpoints": [{"url": "http://11.1.1.78/placement";, "interface": "public", 
"region": "RegionOne", "region_id": "RegionOne", "id": 
"eeea1c9001bd40a88042017221e81c1a"}], "type": "placement", "id": 
"b68c8f2103124c44988fb683a634ec94", "name": "placement"}, {"endpoints": 
[{"url": "http://11.1.1.78:9696/";, "interface": "public", "region": 
"RegionOne", "region_id": "RegionOne", "id": 
"554bfbe2bcde441