[Yahoo-eng-team] [Bug 1651327] Re: Different behavior in firewall_group creation and updation

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/412754
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=58417e5f434b2cb8feecfe8aa78b60d4de21693f
Submitter: Jenkins
Branch: master

commit 58417e5f434b2cb8feecfe8aa78b60d4de21693f
Author: ZhaoBo 
Date:   Tue Dec 20 10:44:07 2016 +0800

Fix PENDING_UPDATE state when updating an existing no-policy fw_group with ports

This patch makes updating an existing fw_group that contains only ports
(and no policy) return the same state as fw_group creation, rather than
leaving the group in PENDING_UPDATE.

Closes-Bug: #1651327
Change-Id: I64e1ed4d790f11cb321f32651bbdc57ff265cd68


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1651327

Title:
  Different behavior in firewall_group creation and updation

Status in neutron:
  Fix Released

Bug description:
  I use the REST API to create a firewall_group like this:
  {
  "firewall_group": {
  "name": "ag1",
  "ports": ["2c6b1bcf-a1d6-4efa-8c7a-b7f0966aa3d1"]
  }
  }
  The response is:
{
"firewall_group": {
  "status": "INACTIVE",
  "public": false,
  "egress_firewall_policy_id": null,
  "name": "ag1",
  "admin_state_up": true,
  "tenant_id": "88ecb8bb6abb4207bb9a832e08eef245",
  "project_id": "88ecb8bb6abb4207bb9a832e08eef245",
  "id": "1f6ae5b9-0820-4572-9057-457ed139d7e6",
  "ingress_firewall_policy_id": null,
  "description": ""
}
  }
  This is correct: since no policy is attached, there is no reason to call
  the agent to refresh the iptables.

  But when I send the same request body in a PUT request, the behavior differs.
  PUT request:
  {
  "firewall_group": {
  "name": "ag1",
  "ports": ["2c6b1bcf-a1d6-4efa-8c7a-b7f0966aa3d1"],
  }
  }

  PUT response:
  {
"firewall_group": {
  "status": "PENDING_UPDATE",
  "description": "",
  "ingress_firewall_policy_id": null,
  "id": "034763aa-841d-4e3c-a327-b3430330cd98",
  "name": "ag1",
  "admin_state_up": true,
  "tenant_id": "88ecb8bb6abb4207bb9a832e08eef245",
  "ports": [
"2c6b1bcf-a1d6-4efa-8c7a-b7f0966aa3d1"
  ],
  "project_id": "88ecb8bb6abb4207bb9a832e08eef245",
  "public": false,
  "egress_firewall_policy_id": null
}
  }
  Then the logic calls the agent to install the default iptables chains for
  the port even though no policy is attached.
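
  A minimal sketch of the consistent behaviour this report asks for (an
  illustration only, not the actual neutron-fwaas code; the helper name is
  made up):

    def decide_fwg_update_status(fwg):
        """Pick the post-update status for a firewall group."""
        has_policy = (fwg.get('ingress_firewall_policy_id') or
                      fwg.get('egress_firewall_policy_id'))
        if not has_policy:
            # Nothing for the agent to program, so behave like creation:
            # stay INACTIVE and skip the agent RPC entirely.
            return 'INACTIVE', False        # (status, notify_agent)
        # A policy is attached, so the agent has to refresh iptables and the
        # group passes through PENDING_UPDATE until the agent reports back.
        return 'PENDING_UPDATE', True

    status, notify_agent = decide_fwg_update_status(
        {'ports': ['2c6b1bcf-a1d6-4efa-8c7a-b7f0966aa3d1'],
         'ingress_firewall_policy_id': None,
         'egress_firewall_policy_id': None})
    assert status == 'INACTIVE' and notify_agent is False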

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1651327/+subscriptions



[Yahoo-eng-team] [Bug 1638813] Re: CLI get-password got nothing

2017-01-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1638813

Title:
  CLI get-password got nothing

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description
  ===
  After an instance was created, the CLI 'nova get-password' failed to
  return the admin password.

  Steps to reproduce
  ==
  * Config nova.conf
  [libvirt]
  inject_password=true
  inject_partition=-1
  * Create one instance
  # nova boot --flavor 1 --image cirros test
  '| adminPass| cU8bi4mB4TxC  '
  * Use CLI nova get-password test to get admin's password
  # nova get-password test

  Expected result
  ===
  Get password like 'cU8bi4mB4TxC'.

  Actual result
  =
  Get nothing.

  Environment
  ===
  1. nova and novaclient version
  # rpm -qa | grep nova
  openstack-nova-scheduler-13.1.0-1.el7.noarch
  openstack-nova-compute-13.1.0-1.el7.noarch
  openstack-nova-common-13.1.0-1.el7.noarch
  openstack-nova-conductor-13.1.0-1.el7.noarch
  python-nova-13.1.0-1.el7.noarch
  openstack-nova-api-13.1.0-1.el7.noarch
  python-novaclient-3.3.1-1.el7.noarch
  openstack-nova-console-13.1.0-1.el7.noarch
  openstack-nova-novncproxy-13.1.0-1.el7.noarch

  2. libvirt+KVM

  
  Logs & Configs
  ==
  [libvirt]
  inject_password=true
  inject_partition=-1
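
  For reference, a hedged sketch of the same call made through
  python-novaclient (the credentials, auth URL and key path below are
  placeholders, and the described behaviour is an assumption, not a
  confirmed root cause):

  from novaclient import client

  nova = client.Client('2', 'admin', 'ADMIN_PASS', 'admin',
                       'http://controller:5000/v2.0')
  server = nova.servers.find(name='test')

  # get-password returns the encrypted password stored for the instance and
  # decrypts it with the keypair's private key; it returns an empty string
  # when nothing has been stored for the guest.
  with open('/root/.ssh/id_rsa') as key_file:
      private_key = key_file.read()
  print(nova.servers.get_password(server, private_key))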

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1638813/+subscriptions



[Yahoo-eng-team] [Bug 1656482] [NEW] GET /resource_providers?member_of does not validate the value is a uuid

2017-01-13 Thread Matt Riedemann
Public bug reported:

The 1.3 microversion of the placement API adds a member_of query string
parameter to the /resource_providers handler and the values are meant to
be aggregate uuids, but the REST API handler code simply parses the
query string and passes the filter through to the DB API query code,
which is doing a simple aggregate.uuid IN [values] query. For something
that's not a uuid it's just going to result in no results and return an
empty list, but the REST API should be stricter about the actual
member_of values being uuids.
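
A rough sketch of the stricter check this implies (illustrative only, not
the actual placement handler; the helper name is made up and the "in:"
multi-value syntax is glossed over):

import webob.exc
from oslo_utils import uuidutils


def parse_member_of(req):
    """Return the aggregate uuids from ?member_of=, or raise a 400."""
    raw = req.GET.get('member_of', '')
    agg_uuids = [value for value in raw.split(',') if value]
    for agg_uuid in agg_uuids:
        if not uuidutils.is_uuid_like(agg_uuid):
            raise webob.exc.HTTPBadRequest(
                'Invalid query string parameters: expected member_of '
                'value to be a uuid, got: %s' % agg_uuid)
    return agg_uuids


req = webob.Request.blank('/resource_providers?member_of=not-a-uuid')
# parse_member_of(req) now raises HTTPBadRequest instead of silently
# producing an empty result list.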

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: api placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656482

Title:
  GET /resource_providers?member_of does not validate the value is a
  uuid

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The 1.3 microversion of the placement API adds a member_of query
  string parameter to the /resource_providers handler and the values are
  meant to be aggregate uuids, but the REST API handler code simply
  parses the query string and passes the filter through to the DB API
  query code, which is doing a simple aggregate.uuid IN [values] query.
  For something that's not a uuid it's just going to result in no
  results and return an empty list, but the REST API should be stricter
  about the actual member_of values being uuids.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656482/+subscriptions



[Yahoo-eng-team] [Bug 1656479] Re: nova-manage cell_v2 verify_instance has unnecessary uuid check

2017-01-13 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/newton
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656479

Title:
  nova-manage cell_v2 verify_instance has unnecessary uuid check

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  The nova-manage cell_v2 verify_instance command has a check to see if
  the uuid argument is not provided and then fails, breaking its own
  rule about not printing anything if --quiet is used. However, it
  doesn't even need that check because argparse will handle the
  validation of --uuid not being provided:

  
https://github.com/openstack/nova/blob/a18f601753f92ff4a2a42be0962a188f583bbfb9/nova/cmd/manage.py#L1338

  stack@ocata:~$ nova-manage cell_v2 verify_instance --quiet
  usage: nova-manage cell_v2 verify_instance [-h] --uuid  [--quiet]
  nova-manage cell_v2 verify_instance: error: argument --uuid is required
  stack@ocata:~$ echo $?
  2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656479/+subscriptions



[Yahoo-eng-team] [Bug 1656479] [NEW] nova-manage cell_v2 verify_instance has unnecessary uuid check

2017-01-13 Thread Matt Riedemann
Public bug reported:

The nova-manage cell_v2 verify_instance command has a check to see if
the uuid argument is not provided and then fails, breaking its own rule
about not printing anything if --quiet is used. However, it doesn't even
need that check because argparse will handle the validation of --uuid
not being provided:

https://github.com/openstack/nova/blob/a18f601753f92ff4a2a42be0962a188f583bbfb9/nova/cmd/manage.py#L1338

stack@ocata:~$ nova-manage cell_v2 verify_instance --quiet
usage: nova-manage cell_v2 verify_instance [-h] --uuid  [--quiet]
nova-manage cell_v2 verify_instance: error: argument --uuid is required
stack@ocata:~$ echo $?
2
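
A standalone illustration of why the extra check is redundant (generic
argparse usage, not nova's actual manage.py):

import argparse

parser = argparse.ArgumentParser(
    prog='nova-manage cell_v2 verify_instance')
parser.add_argument('--uuid', required=True, help='instance UUID to verify')
parser.add_argument('--quiet', action='store_true')

try:
    # With no arguments, argparse itself prints the usage/error text to
    # stderr and exits with status 2, as in the shell transcript above.
    parser.parse_args([])
except SystemExit as exc:
    print('exit status: %s' % exc.code)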

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: cells nova-manage

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656479

Title:
  nova-manage cell_v2 verify_instance has unnecessary uuid check

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  The nova-manage cell_v2 verify_instance command has a check to see if
  the uuid argument is not provided and then fails, breaking its own
  rule about not printing anything if --quiet is used. However, it
  doesn't even need that check because argparse will handle the
  validation of --uuid not being provided:

  
https://github.com/openstack/nova/blob/a18f601753f92ff4a2a42be0962a188f583bbfb9/nova/cmd/manage.py#L1338

  stack@ocata:~$ nova-manage cell_v2 verify_instance --quiet
  usage: nova-manage cell_v2 verify_instance [-h] --uuid  [--quiet]
  nova-manage cell_v2 verify_instance: error: argument --uuid is required
  stack@ocata:~$ echo $?
  2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656479/+subscriptions



[Yahoo-eng-team] [Bug 1655255] Re: scheduler_hints not working

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/418243
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=49a5e68a17bd183e019627ab9bb3746494e010f5
Submitter: Jenkins
Branch: master

commit 49a5e68a17bd183e019627ab9bb3746494e010f5
Author: liyingjun 
Date:   Tue Jan 10 15:58:27 2017 +0800

Add missing scheduler_hints to _optional_create

The scheduler_hints option is missing from server_create rest api.
This patch adds it.

Change-Id: Iab587abecbfd73fec8e966ca86cdde7242c80207
Closes-bug: #1655255


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1655255

Title:
  scheduler_hints not working

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  scheduler_hints is dropped when creating an instance because it doesn't
  exist in _optional_create:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/rest/nova.py#L346-L350
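
  An illustrative sketch of the pattern involved (not the exact Horizon
  source; the names below are simplified):

  _OPTIONAL_CREATE = (
      'availability_zone', 'block_device_mapping', 'block_device_mapping_v2',
      'config_drive', 'disk_config', 'instance_count', 'meta', 'nics',
      'scheduler_hints',   # the fix adds this entry
  )

  def build_create_kwargs(request_body):
      # Only whitelisted optional keys are forwarded to nova; anything not
      # listed (previously scheduler_hints) is silently dropped.
      return {key: request_body[key]
              for key in _OPTIONAL_CREATE if key in request_body}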

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1655255/+subscriptions



[Yahoo-eng-team] [Bug 1656127] Re: 404 error on contributor docs pages

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/419903
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d8d41ae5d6ceda8677b1a46d1bbaf0a0ff4e593b
Submitter: Jenkins
Branch: master

commit d8d41ae5d6ceda8677b1a46d1bbaf0a0ff4e593b
Author: John Davidge 
Date:   Fri Jan 13 11:43:43 2017 +

Fix broken links in devref

Change-Id: I42b58963125166763afeb46e5c4575b9913c867a
Closes-Bug: #1656127


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656127

Title:
  404 error on contributor docs pages

Status in neutron:
  Fix Released

Bug description:
  Go to
  
http://docs.openstack.org/developer/neutron/devref/template_model_sync_test.html
  and there's a broken link to oslo.db docs for test_migrations,
  
http://docs.openstack.org/developer/oslo.db/api/sqlalchemy/test_migrations.html.
  Other broken links and referring pages include:

  
  {"url": 
"http://docs.openstack.org/newton/networking-guide/scenario_legacy_ovs.html;, 
"status": 404, "referer": 
"http://docs.openstack.org/developer/neutron/devref/layer3.html"},
  {"url": 
"http://docs.openstack.org/newton/networking-guide/deploy_scenario4b.html;, 
"status": 404, "referer": 
"http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
  {"url": 
"http://docs.openstack.org/newton/networking-guide/deploy_scenario3b.html;, 
"status": 404, "referer": 
"http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
  {"url": 
"http://docs.openstack.org/newton/networking-guide/deploy_scenario1b.html;, 
"status": 404, "referer": 
"http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
  {"url": 
"http://docs.openstack.org/newton/networking-guide/scenario_legacy_lb.html;, 
"status": 404, "referer": 
"http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},

  While we're working hard to get redirects in place, better to get the
  "real" link in there when you can.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656127/+subscriptions



[Yahoo-eng-team] [Bug 1654032] Re: CI: unable to ping floating-ip in pingtest

2017-01-13 Thread Ben Nemec
** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654032

Title:
  CI: unable to ping floating-ip in pingtest

Status in neutron:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  We're seeing a lot of spurious failures in the ping test on HA jobs
  lately.

  Logstash query:
  
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3A%20*tripleo-ci*%20AND%20build_status%3A%20FAILURE%20AND%20message%3A%20%5C%22From%2010.0.0.1%20icmp_seq%3D1%20Destination%20Host%20Unreachable%5C%22

  Sample failure log: http://logs.openstack.org/76/416576/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/6db60be/console.html#_2017-01-04_16_40_34_770751

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1654032/+subscriptions



[Yahoo-eng-team] [Bug 1656297] Re: Remove references to defunct Stadium documentation

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/419974
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f57dbf3f1329c0b56bceb7c919f121369b865de2
Submitter: Jenkins
Branch: master

commit f57dbf3f1329c0b56bceb7c919f121369b865de2
Author: John Davidge 
Date:   Fri Jan 13 13:48:50 2017 +

Remove references to defunct Stadium docs

Change-Id: I4f755e88bfbe19e58de81dc7ec4d12a26a43c8c0
Closes-Bug: #1656297


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656297

Title:
  Remove references to defunct Stadium documentation

Status in neutron:
  Fix Released

Bug description:
  [1] and [2] both contain references to out-of-date Stadium processes.
  Update them to reflect the new way of doing things.

  Sections to be updated:

  Reference to doc/source/stadium/sub_projects.rst in [3].
  Reference to [4] in [5].
  Reference to [6] in [7].

  [1] http://docs.openstack.org/developer/neutron/devref/contribute.html
  [2] http://docs.openstack.org/developer/neutron/policies/bugs.html
  [3] 
http://docs.openstack.org/developer/neutron/devref/contribute.html#contribution-process
  [4] http://docs.openstack.org/developer/neutron/stadium/sub_projects.html
  [5] 
http://docs.openstack.org/developer/neutron/devref/contribute.html#project-initial-setup
  [6] 
http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html#sub-project-release-process
  [7] 
http://docs.openstack.org/developer/neutron/policies/bugs.html#plugin-and-driver-repositories

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656297/+subscriptions



[Yahoo-eng-team] [Bug 1656403] [NEW] The py-modindex for neutron-dynamic-routing docs is a broken link

2017-01-13 Thread Anne Gentle
Public bug reported:

There's a broken link for the Module Index on
http://docs.openstack.org/developer/neutron-dynamic-routing/.

Also, that doc set has these broken links:


{"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/route-advertisement.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
{"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/others/testing.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
{"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/dynamic-routing-agent.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
{"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/design/drivers.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc

** Description changed:

  There's a broken link for the Module Index on
  http://docs.openstack.org/developer/neutron-dynamic-routing/.
+ 
+ Also, that doc set has these broken links:
+ 
+ 
+ {"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/route-advertisement.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
+ {"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/others/testing.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
+ {"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/dynamic-routing-agent.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
+ {"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/design/drivers.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656403

Title:
  The py-modindex for neutron-dynamic-routing docs is a broken link

Status in neutron:
  New

Bug description:
  There's a broken link for the Module Index on
  http://docs.openstack.org/developer/neutron-dynamic-routing/.

  Also, that doc set has these broken links:

  
  {"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/route-advertisement.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
  {"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/others/testing.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
  {"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/dynamic-routing-agent.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
   
  {"url": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/design/drivers.rst;,
 "status": 404, "referer": 
"http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656403/+subscriptions



[Yahoo-eng-team] [Bug 1656386] [NEW] Memory leaks on linuxbridge job

2017-01-13 Thread Darek Smigiel
Public bug reported:

A couple of examples of recent memory leaks in the linuxbridge job: [1], [2]

[1] 
http://logs.openstack.org/73/373973/13/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/295d92f/logs/syslog.txt.gz#_Jan_11_13_56_32
[2] 
http://logs.openstack.org/59/382659/26/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/7de01d0/logs/syslog.txt.gz#_Jan_11_15_54_36

Close to the end of the test run, swap consumption grows quickly, exceeding 2 GB.
I haven't found the root cause of this yet.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656386

Title:
  Memory leaks on linuxbridge job

Status in neutron:
  New

Bug description:
  A couple of examples of recent memory leaks in the linuxbridge job: [1], [2]

  [1] 
http://logs.openstack.org/73/373973/13/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/295d92f/logs/syslog.txt.gz#_Jan_11_13_56_32
  [2] 
http://logs.openstack.org/59/382659/26/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/7de01d0/logs/syslog.txt.gz#_Jan_11_15_54_36

  Close to the end of the test run, swap consumption grows quickly,
  exceeding 2 GB. I haven't found the root cause of this yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656386/+subscriptions



[Yahoo-eng-team] [Bug 1642692] Re: Protocol can't be deleted after federated_user is created

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/415906
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=45f7ff3918ce8d05585d1c2e1740462e711965fe
Submitter: Jenkins
Branch:master

commit 45f7ff3918ce8d05585d1c2e1740462e711965fe
Author: Rodrigo Duarte Sousa 
Date:   Tue Jan 3 10:41:07 2017 -0300

Cascade delete federated_user fk

The bug was caused by a foreign key in the federated_user table. This
key prevents a protocol from being deleted after a successful
authentication has happened (so the creation of a federated user
via shadowing). We take advantage of the same foreign key by adding the
cascade delete behavior to it.

Closes-Bug: 1642692

Change-Id: I3b3e265d20f0cfe0ee10c6a274d9bdf4e840b742


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1642692

Title:
  Protocol can't be deleted after federated_user is created

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When authenticating a user via federation, a federated_user entry is
  created in keystone's database, an example of such entry is below:

  mysql> select * from federated_user;
  
  +----+------------------------------+----------+-------------+-----------------------+---------------------+
  | id | user_id                      | idp_id   | protocol_id | unique_id             | display_name        |
  +----+------------------------------+----------+-------------+-----------------------+---------------------+
  |  1 | 15ddf8fda20842c68b9b6d91d1a7 | testshib | mapped      | myself%40testshib.org | mys...@testshib.org |
  +----+------------------------------+----------+-------------+-----------------------+---------------------+

  The federated_user_protocol_id foreign key prevents the protocol
  deletion:

  Details: An unexpected error prevented the server from fulfilling your
  request: (pymysql.err.IntegrityError) (1451, u'Cannot delete or update
  a parent row: a foreign key constraint fails
  (`keystone`.`federated_user`, CONSTRAINT
  `federated_user_protocol_id_fkey` FOREIGN KEY (`protocol_id`,
  `idp_id`) REFERENCES `federation_protocol` (`id`, `idp_id`))') [SQL:
  u'DELETE FROM federation_protocol WHERE federation_protocol.id =
  %(id)s AND federation_protocol.idp_id = %(idp_id)s'] [parameters:
  {'idp_id': u'testshib', 'id': u'mapped'}]

  This can also happen with the "idp_id" column.

  This prevents automated tests like [1] to properly work, since it
  creates and destroys the identity provider, mapping and protocol
  during its execution.

  [1] https://review.openstack.org/#/c/324769/
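
  A hedged sketch of the schema change implied by the fix (not the actual
  keystone migration; the column types are guesses):

  import sqlalchemy as sql

  meta = sql.MetaData()

  federation_protocol = sql.Table(
      'federation_protocol', meta,
      sql.Column('id', sql.String(64), primary_key=True),
      sql.Column('idp_id', sql.String(64), primary_key=True))

  federated_user = sql.Table(
      'federated_user', meta,
      sql.Column('id', sql.Integer, primary_key=True),
      sql.Column('user_id', sql.String(64)),
      sql.Column('idp_id', sql.String(64)),
      sql.Column('protocol_id', sql.String(64)),
      sql.Column('unique_id', sql.String(255)),
      sql.Column('display_name', sql.String(255)),
      # ondelete='CASCADE' is the key change: deleting a protocol (and its
      # idp) now removes the shadow users that reference it instead of
      # raising an IntegrityError.
      sql.ForeignKeyConstraint(
          ['protocol_id', 'idp_id'],
          ['federation_protocol.id', 'federation_protocol.idp_id'],
          name='federated_user_protocol_id_fkey',
          ondelete='CASCADE'))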

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1642692/+subscriptions



[Yahoo-eng-team] [Bug 1656101] Re: delete swift container does not work

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/419671
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=874344028529df641f34c5d0af618b3e3355fac0
Submitter: Jenkins
Branch:master

commit 874344028529df641f34c5d0af618b3e3355fac0
Author: Richard Jones 
Date:   Fri Jan 13 09:22:47 2017 +1100

Prevent a "link" click on container trash icon

For some reason the $event.stopPropagation() is causing a
"link" follow to "/". This patch removes it, even though
it means the accordion will expand on trash icon clicks.

Change-Id: I0da83149c3256c09228fc3dc2490f601c453a551
Closes-Bug: 1656101


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1656101

Title:
  delete swift container does not work

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When trying to delete a container within swift, we get a pop-up to
  confirm we want it deleted.

  If you act very quickly, you can confirm "yes" and delete the
  container.   If you are slow, you get redirected back to the overview
  / home page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1656101/+subscriptions



[Yahoo-eng-team] [Bug 1656349] Re: Incompatibility with webob 1.7.0

2017-01-13 Thread Steve Martinelli
*** This bug is a duplicate of bug 1653646 ***
https://bugs.launchpad.net/bugs/1653646

fixed in https://review.openstack.org/#/c/416198/
dupe of https://bugs.launchpad.net/keystonemiddleware/+bug/1653646

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

** This bug has been marked a duplicate of bug 1653646
   TypeError: You cannot set the body to a text value without a charset (WebOb 
1.6.3)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656349

Title:
  Incompatibility with webob 1.7.0

Status in OpenStack Identity (keystone):
  Invalid
Status in keystonemiddleware:
  New

Bug description:
  
 File "/<>/keystonemiddleware/auth_token/__init__.py", line 
320, in __call__
   response = self.process_request(req)
 File "/<>/keystonemiddleware/auth_token/__init__.py", line 
582, in process_request
   content_type='application/json')
 File "/usr/lib/python3/dist-packages/webob/exc.py", line 268, in __init__
   **kw)
 File "/usr/lib/python3/dist-packages/webob/response.py", line 310, in 
__init__
   "You cannot set the body to a text value without a "
   TypeError: You cannot set the body to a text value without a charset
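
  A minimal illustration of the WebOb behaviour (a sketch only, not
  keystonemiddleware's actual fix):

  import webob.exc

  body = u'{"error": {"code": 401, "title": "Unauthorized"}}'

  # This mirrors the failing call in the traceback and raises
  # "You cannot set the body to a text value without a charset"
  # under newer WebOb:
  #     webob.exc.HTTPUnauthorized(body=body, content_type='application/json')

  # Passing an explicit charset (or encoding the body to bytes first)
  # sidesteps the error:
  resp = webob.exc.HTTPUnauthorized(body=body,
                                    content_type='application/json',
                                    charset='UTF-8')
  print(resp.status)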

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656349/+subscriptions



[Yahoo-eng-team] [Bug 1656276] Re: Error running nova-manage cell_v2 simple_cell_setup when configuring nova with puppet-nova

2017-01-13 Thread Matt Riedemann
I'm also tracking some of these open questions/issues in the wiki here:

https://wiki.openstack.org/wiki/Nova-Cells-v2

** Changed in: nova
 Assignee: (unassigned) => Sylvain Bauza (sylvain-bauza)

** Changed in: nova
   Status: New => Confirmed

** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656276

Title:
  Error running nova-manage  cell_v2 simple_cell_setup when configuring
  nova with puppet-nova

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) newton series:
  New
Status in puppet-nova:
  New
Status in tripleo:
  Triaged

Bug description:
  When installing and configuring nova with puppet-nova (with either
  tripleo, packstack or puppet-openstack-integration), we are getting the
  following errors:

  Debug: Executing: '/usr/bin/nova-manage  cell_v2 simple_cell_setup 
--transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
  Debug: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Sleeping for 5 seconds between tries
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Cell0 is already setup.
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 No hosts found to map to cell, exiting.

  The issue seems to be that it's running "nova-manage  cell_v2
  simple_cell_setup" as part of the nova database initialization when no
  compute nodes have been created but it returns 1 in that case [1].
  However, note that the previous steps (Cell0 mapping and schema
  migration) were successfully run.

  I think for nova bootstrap a reasonable orchestrated workflow would
  be:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. nova cell0 mapping and schema creation.
  4. Adding compute nodes
  5. mapping compute nodes (by running nova-manage cell_v2 discover_hosts)

  For step 3 we'd need to get simple_cell_setup to return 0 when not
  having compute nodes, or having a different command.

  With current behavior of nova-manage the only working workflow we can
  do is:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. Adding all compute nodes
  4. nova cell0 mapping and schema creation with "nova-manage cell_v2 
simple_cell_setup".

  Am I right? Is there a better alternative?

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L1112-L1114

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656276/+subscriptions



[Yahoo-eng-team] [Bug 1656355] [NEW] The py-modindex for networking-bgpvpn docs is a broken link

2017-01-13 Thread Anne Gentle
Public bug reported:

The Module Index link on this page:

http://docs.openstack.org/developer/networking-bgpvpn/

Is giving a 404 error.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: bgpvpn doc l3-bgp

** Tags added: doc

** Tags added: bgpvpn l3-bgp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656355

Title:
  The py-modindex for networking-bgpvpn docs is a broken link

Status in neutron:
  New

Bug description:
  The Module Index link on this page:

  http://docs.openstack.org/developer/networking-bgpvpn/

  Is giving a 404 error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656355/+subscriptions



[Yahoo-eng-team] [Bug 1656349] [NEW] Incompatibility with webob 1.7.0

2017-01-13 Thread Chuck Short
Public bug reported:


   File "/<>/keystonemiddleware/auth_token/__init__.py", line 320, 
in __call__
 response = self.process_request(req)
   File "/<>/keystonemiddleware/auth_token/__init__.py", line 582, 
in process_request
 content_type='application/json')
   File "/usr/lib/python3/dist-packages/webob/exc.py", line 268, in __init__
 **kw)
   File "/usr/lib/python3/dist-packages/webob/response.py", line 310, in 
__init__
 "You cannot set the body to a text value without a "
 TypeError: You cannot set the body to a text value without a charset

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656349

Title:
  Incompatibility with webob 1.7.0

Status in OpenStack Identity (keystone):
  New

Bug description:
  
 File "/<>/keystonemiddleware/auth_token/__init__.py", line 
320, in __call__
   response = self.process_request(req)
 File "/<>/keystonemiddleware/auth_token/__init__.py", line 
582, in process_request
   content_type='application/json')
 File "/usr/lib/python3/dist-packages/webob/exc.py", line 268, in __init__
   **kw)
 File "/usr/lib/python3/dist-packages/webob/response.py", line 310, in 
__init__
   "You cannot set the body to a text value without a "
   TypeError: You cannot set the body to a text value without a charset

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656349/+subscriptions



[Yahoo-eng-team] [Bug 1656346] [NEW] Devref has a link to py-modindex that gives a 404 error

2017-01-13 Thread Anne Gentle
Public bug reported:

On this page: http://docs.openstack.org/developer/neutron-
lib/devref/index.html there's a link to py-modindex but it doesn't
exist.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656346

Title:
  Devref has a link to py-modindex that gives a 404 error

Status in neutron:
  New

Bug description:
  On this page: http://docs.openstack.org/developer/neutron-
  lib/devref/index.html there's a link to py-modindex but it doesn't
  exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656346/+subscriptions



[Yahoo-eng-team] [Bug 1614815] Re: api-ref: security-group api show wrong description of security_group_id parameter

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/357593
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=6cfa2c03d77aec728d5fed8fa4f6831dafa8bdc7
Submitter: Jenkins
Branch: master

commit 6cfa2c03d77aec728d5fed8fa4f6831dafa8bdc7
Author: Nguyen Phuong An 
Date:   Fri Aug 19 10:44:34 2016 +0700

api-ref: Fix descriptions of sec-grp parameters

This patch corrects descriptions of request/response parameters of
the security-groups API.

Partially-Implements: blueprint neutron-in-tree-api-ref
Closes-Bug: #1614815

Co-Authored-By: Anindita Das 
Change-Id: I48df20026118f6f62bbb7da4a216227b89b9bf3d


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614815

Title:
  api-ref: security-group api show wrong description of
  security_group_id parameter

Status in neutron:
  Fix Released

Bug description:
  The security-groups API shows the wrong description for the
  'security_group_id' and 'id' parameters in the request/response parameters.

  [1] http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=show-security-group-detail,update-security-group-detail,delete-security-group-detail#show-security-group

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614815/+subscriptions



[Yahoo-eng-team] [Bug 1656241] Re: got an unexpected keyword argument 'app_name'

2017-01-13 Thread Boris Bobrov
** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656241

Title:
  got an unexpected keyword argument 'app_name'

Status in python-openstackclient:
  Incomplete

Bug description:
  I'm going to deploy Newton and after I installed Keystone I got this
  error:

  # openstack --debug endpoint list
  START with options: [u'--debug', u'endpoint', u'list']
  options: Namespace(access_key='', access_secret='***', access_token='***', 
access_token_endpoint='', access_token_type='', auth_type='', 
auth_url='http://p025.domain.com:35357/v3', authorization_code='', cacert=None, 
cert='', client_id='', client_secret='***', cloud='', consumer_key='', 
consumer_secret='***', debug=True, default_domain='default', 
default_domain_id='', default_domain_name='', deferred_help=False, 
discovery_endpoint='', domain_id='', domain_name='', endpoint='', 
identity_provider='', identity_provider_url='', insecure=None, interface='', 
key='', log_file=None, old_profile=None, openid_scope='', 
os_baremetal_api_version='1.9', os_beta_command=False, 
os_compute_api_version='', os_container_infra_api_version='1', 
os_dns_api_version='2', os_identity_api_version='3', os_image_api_version='2', 
os_network_api_version='', os_object_api_version='', 
os_orchestration_api_version='1', os_project_id=None, os_project_name=None, 
os_volume_api_version='', os_workflow_api_version='2'
 , passcode='', password='***', profile=None, project_domain_id='', 
project_domain_name='default', project_id='', project_name='admin', 
protocol='', redirect_uri='', region_name='', timing=False, token='***', 
trust_id='', url='', user_domain_id='', user_domain_name='default', user_id='', 
username='admin', verbose_level=3, verify=None)
  Auth plugin password selected
  auth_config_hook(): {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
u'metering_api_version': u'2', 'auth_url': 'http://p025.domain.com:35357/v3', 
u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 
'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': 
'2', u'object_store_api_version': u'1', 'username': 'admin', 
'container_infra_api_version': '1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': '1.9', 'auth': 
{'user_domain_name': 'default', 'project_name': 'admin', 'project_domain_name': 
'default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': '1', 'timing': False, 'password': '***', 
'cacert': None, u'key_manager_api_version': u'v1', 'workflow_api_version': '2', 
'deferred_help': False, u'identity_api_version': '3'
 , u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', 
u'status': u'active', 'debug': True, u'interface': None, 
u'disable_vendor_agent': {}}
  defaults: {u'auth_type': 'password', u'status': u'active', 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': 
u'2', u'metering_api_version': u'2', u'image_api_use_tasks': False, 
u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 
'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': 
u'qcow2', u'key_manager_api_version': u'v1', 'verify': True, 
u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': None, 
u'secgroup_source': u'neutron', u'container_api_version': u'1', 
u'dns_api_version': u'2', u'object_store_api_version': u'1', u'interface': 
None, u'disable_vendor_agent': {}}
  cloud cfg: {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
u'metering_api_version': u'2', 'auth_url': 'http://p025.domain.com:35357/v3', 
u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 
'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': 
'2', u'object_store_api_version': u'1', 'username': 'admin', 
'container_infra_api_version': '1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': '1.9', 'auth': 
{'user_domain_name': 'default', 'project_name': 'admin', 'project_domain_name': 
'default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': '1', 'timing': False, 'password': '***', 
'cacert': None, u'key_manager_api_version': u'v1', 'workflow_api_version': '2', 
'deferred_help': False, u'identity_api_version': '3', u'volum
 e_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', 

[Yahoo-eng-team] [Bug 1394299] Re: Shared image shows as private

2017-01-13 Thread Brian Rosmaita
** Changed in: glance
   Status: Invalid => In Progress

** Changed in: glance
   Importance: Medium => Critical

** Changed in: glance
 Assignee: (unassigned) => Dharini Chandrasekar (dharini-chandrasekar)

** Changed in: glance
Milestone: None => ocata-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1394299

Title:
  Shared image shows as private

Status in Glance:
  In Progress

Bug description:

  Image 281d576a-9e4b-4d11-94bb-8b1e89f62a71 is owned by this user and
  correctly shows as 'private'; however, image
  '795518ca-13a6-4493-b3a3-91519ad7c067' is not owned by this user, it is a
  shared image.


   $ glance --os-image-api-version 2 image-list --visibility shared
   +--------------------------------------+-----------------+
   | ID                                   | Name            |
   +--------------------------------------+-----------------+
   | 795518ca-13a6-4493-b3a3-91519ad7c067 | accepted--image |  <<< correct, the shared image is shown
   +--------------------------------------+-----------------+

  
   $ glance --os-image-api-version 2 image-list --visibility private
   +--------------------------------------+-----------------+
   | ID                                   | Name            |
   +--------------------------------------+-----------------+
   | 281d576a-9e4b-4d11-94bb-8b1e89f62a71 | private-image   |
   | 795518ca-13a6-4493-b3a3-91519ad7c067 | accepted--image |  <<< wrong, I think, this is shared, not private
   +--------------------------------------+-----------------+

  
   $ glance --os-image-api-version 2 image-show 281d576a-9e4b-4d11-94bb-8b1e89f62a71
   +------------------+--------------------------------------+
   | Property         | Value                                |
   +------------------+--------------------------------------+
   | checksum         | 398759a311bf25c6f1d67e753bb24dae     |
   | container_format | bare                                 |
   | created_at       | 2014-11-18T11:16:33Z                 |
   | disk_format      | raw                                  |
   | id               | 281d576a-9e4b-4d11-94bb-8b1e89f62a71 |
   | min_disk         | 0                                    |
   | min_ram          | 0                                    |
   | name             | private-image                        |
   | owner            | f68be3a5c2b14721a9e0ed2fcb750481     |
   | protected        | False                                |
   | size             | 106                                  |
   | status           | active                               |
   | tags             | []                                   |
   | updated_at       | 2014-11-18T15:51:35Z                 |
   | visibility       | private                              |  <<< correct
   +------------------+--------------------------------------+

  
   (py27)ubuntu in ~/git/python-glanceclient on master*
   $ glance --os-image-api-version 2 image-show 795518ca-13a6-4493-b3a3-91519ad7c067
   +------------------+--------------------------------------+
   | Property         | Value                                |
   +------------------+--------------------------------------+
   | checksum         | 398759a311bf25c6f1d67e753bb24dae     |
   | container_format | bare                                 |
   | created_at       | 2014-11-18T11:14:58Z                 |
   | disk_format      | raw                                  |
   | id               | 795518ca-13a6-4493-b3a3-91519ad7c067 |
   | min_disk         | 0                                    |
   | min_ram          | 0                                    |
   | name             | accepted--image                      |
   | owner            | 2dcea26aa97a41fa9547a133f6c7f5b4     |  <<< different owner
   | protected        | False                                |
   | size             | 106                                  |
   | status           | active                               |
   | tags             | []                                   |
   | updated_at       | 2014-11-19T16:32:33Z                 |
   | visibility       | private                              |  <<< wrong, I think
   +------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1394299/+subscriptions



[Yahoo-eng-team] [Bug 1651704] Re: Errors when starting introspection are silently ignored

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/418423
Committed: 
https://git.openstack.org/cgit/openstack/tripleo-common/commit/?id=c7b01eba55e5d133ccc19451cf4727170a5dbdd0
Submitter: Jenkins
Branch: master

commit c7b01eba55e5d133ccc19451cf4727170a5dbdd0
Author: Dougal Matthews 
Date:   Tue Jan 10 14:35:36 2017 +

Fail the baremetal workflows when sending a "FAILED" message

When Mistral workflows execute a second workflow (a sub-workflow
execution), the parent workflow can't easily determine whether the
sub-workflow failed. This is because the failure is communicated only via
a Zaqar message, and when a workflow ends by sending a successful Zaqar
message it appears to have been successful. This problem surfaces because
parent workflows have an "on-error" attribute that is never triggered, as
the workflow doesn't error.

This change marks the workflow as failed if the message has the status
"FAILED". Now when a sub-workflow fails, the task that called it should
have its on-error triggered. Previously it would always go to
on-success.

Closes-Bug: #1651704
Change-Id: I60444ec692351c44753649b59b7c1d7c4b61fa8e


** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1651704

Title:
  Errors when starting introspection are silently ignored

Status in Ironic Inspector:
  Incomplete
Status in OpenStack Compute (nova):
  Invalid
Status in tripleo:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Invalid

Bug description:
  Running tripleo using tripleo-quickstart with minimal profile
  (step_introspect: true) for master branch, overcloud deploy with
  error:

  ResourceInError: resources.Controller: Went to status ERROR due to
  "Message: No valid host was found. There are not enough hosts
  available., Code: 500"

  Looking at nova-scheduler.log, following errors are found:

  https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-
  promote-master-delorean-minimal-806/undercloud/var/log/nova/nova-
  scheduler.log.gz

  2016-12-21 06:45:56.822 17759 DEBUG nova.scheduler.host_manager
  [req-f889dbc0-1096-4f92-80fc-3c5bdcb1ad29
  4f103e0230074c2488b7359bc079d323 f21dbfb3b2c840059ec2a0bba03b7385 - -
  -] Update host state from compute node:
  
ComputeNode(cpu_allocation_ratio=16.0,cpu_info='',created_at=2016-12-21T06:38:28Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=0,free_disk_gb=0,free_ram_mb=0,host='undercloud',host_ip=192.168.23.46,hypervisor_hostname
  ='c6f8f4ba-9c7c-4c87-b95a-
  
67a5861b7bec',hypervisor_type='ironic',hypervisor_version=1,id=2,local_gb=0,local_gb_used=0,memory_mb=0,memory_mb_used=0,metrics='[]',numa_topology=None,pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.0,running_vms=0,service_id=None,stats={boot_option='local',cpu_aes='true',cpu_arch='x86_64',cpu_hugepages='true',cpu_hugepages_1g='true',cpu_vt='true',profile='control'},supported_hv_specs=[HVSpec],updated_at=2016-12-21T06:45:38Z,uuid
  =ac2742da-39fb-4ca4-9f78-8e04f703c7a6,vcpus=0,vcpus_used=0)
  _locked_update /usr/lib/python2.7/site-
  packages/nova/scheduler/host_manager.py:168

  2016-12-21 06:47:48.893 17759 DEBUG
  nova.scheduler.filters.ram_filter [req-2aece1c8-6d3e-457b-
  92d7-a3177680c82e 4f103e0230074c2488b7359bc079d323
  f21dbfb3b2c840059ec2a0bba03b7385 - - -] (undercloud, c6f8f4ba-9c7c-
  4c87-b95a-67a5861b7bec) ram: 0MB disk: 0MB io_ops: 0 instances: 0 does
  not have 8192 MB usable ram before overcommit, it only has 0 MB.
  host_passes /usr/lib/python2.7/site-
  packages/nova/scheduler/filters/ram_filter.py:45

  2016-12-21 06:47:48.894 17759 INFO nova.filters [req-2aece1c8
  -6d3e-457b-92d7-a3177680c82e 4f103e0230074c2488b7359bc079d323
  f21dbfb3b2c840059ec2a0bba03b7385 - - -] Filter RamFilter returned 0
  hosts

  My guess is that node introspection is failing to get proper node
  information.

  Full logs can be found in https://ci.centos.org/artifacts/rdo/jenkins-
  tripleo-quickstart-promote-master-delorean-minimal-806/undercloud/

  We have hit this issue twice in the last runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic-inspector/+bug/1651704/+subscriptions



[Yahoo-eng-team] [Bug 1656297] [NEW] Remove references to defunct Stadium documentation

2017-01-13 Thread John Davidge
Public bug reported:

[1] and [2] both contain references to out-of-date Stadium processes.
Update them to reflect the new way of doing things.

[1] http://docs.openstack.org/developer/neutron/devref/contribute.html
[2] http://docs.openstack.org/developer/neutron/policies/bugs.html

** Affects: neutron
 Importance: Low
 Assignee: John Davidge (john-davidge)
 Status: New


** Tags: doc

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => John Davidge (john-davidge)

** Tags added: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656297

Title:
  Remove references to defunct Stadium documentation

Status in neutron:
  New

Bug description:
  [1] and [2] both contain references to out-of-date Stadium processes.
  Update them to reflect the new way of doing things.

  [1] http://docs.openstack.org/developer/neutron/devref/contribute.html
  [2] http://docs.openstack.org/developer/neutron/policies/bugs.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656297/+subscriptions



[Yahoo-eng-team] [Bug 1656262] Re: test_floatingip_list_with_pagination failure

2017-01-13 Thread YAMAMOTO Takashi
** Tags added: gate-failure

** Changed in: networking-midonet
   Importance: Undecided => Critical

** Changed in: networking-midonet
   Status: New => In Progress

** Changed in: networking-midonet
Milestone: None => 4.0.0

** Changed in: networking-midonet
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656262

Title:
  test_floatingip_list_with_pagination failure

Status in networking-midonet:
  In Progress
Status in neutron:
  In Progress

Bug description:
  
midonet.neutron.tests.unit.test_midonet_plugin_ml2.TestMidonetL3NatExtraRoute.test_floatingip_list_with_pagination_reverse
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/src/neutron/neutron/tests/base.py",
 line 114, in func
  return f(self, *args, **kwargs)
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/src/neutron/neutron/tests/base.py",
 line 114, in func
  return f(self, *args, **kwargs)
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/src/neutron/neutron/tests/base.py",
 line 114, in func
  return f(self, *args, **kwargs)
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/src/neutron/neutron/tests/unit/extensions/test_l3.py",
 line 2730, in test_floatingip_list_with_pagination_reverse
  ('floating_ip_address', 'asc'), 2, 2)
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/src/neutron/neutron/tests/unit/db/test_db_base_plugin_v2.py",
 line 746, in _test_list_with_pagination_reverse
  self.assertThat(len(res[resources]),
  KeyError: 'floatingips'

  
  Captured pythonlogging:
  ~~~
   WARNING [neutron.agent.securitygroups_rpc] Driver configuration doesn't 
match with enable_security_group
   WARNING [neutron.plugins.ml2.managers] Host filtering is disabled 
because at least one mechanism doesn't support it.
   WARNING [neutron.agent.securitygroups_rpc] Driver configuration doesn't 
match with enable_security_group
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network 85a36d0b-3950-4c3b-aa00-8fbdab2b48a4: no agents available; 
will retry on subsequent port and subnet creation events.
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network ce168e7d-70e7-412a-8333-711c3f13890b: no agents available; 
will retry on subsequent port and subnet creation events.
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network c15a7692-272e-4140-bd9a-2faa8287deeb: no agents available; 
will retry on subsequent port and subnet creation events.
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network 85a36d0b-3950-4c3b-aa00-8fbdab2b48a4: no agents available; 
will retry on subsequent port and subnet creation events.
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network ce168e7d-70e7-412a-8333-711c3f13890b: no agents available; 
will retry on subsequent port and subnet creation events.
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network c15a7692-272e-4140-bd9a-2faa8287deeb: no agents available; 
will retry on subsequent port and subnet creation events.
 ERROR [neutron.api.v2.resource] index failed: No details.
  Traceback (most recent call last):
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/src/neutron/neutron/api/v2/resource.py",
 line 79, in resource
  result = method(request=request, **args)
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/src/neutron/neutron/db/api.py",
 line 92, in wrapped
  setattr(e, '_RETRY_EXCEEDED', True)
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/Users/yamamoto/git/networking-midonet/.tox/py27/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 

[Yahoo-eng-team] [Bug 1656276] [NEW] Error running nova-manage cell_v2 simple_cell_setup when configuring nova with puppet-nova

2017-01-13 Thread Alfredo Moralejo
Public bug reported:

When installing and configuring nova with puppet-nova (with either
tripleo, packstack or puppet-openstack-integration), we are getting
following errors:

Debug: Executing: '/usr/bin/nova-manage  cell_v2 simple_cell_setup 
--transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
Debug: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Sleeping for 5 seconds between tries
Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Cell0 is already setup.
Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 No hosts found to map to cell, exiting.

The issue seems to be that "nova-manage cell_v2 simple_cell_setup" is run
as part of the nova database initialization, before any compute nodes have
been created, and the command returns 1 in that case [1]. However, note
that the previous steps (Cell0 mapping and schema migration) were run
successfully.

I think for nova bootstrap a reasonable orchestrated workflow would be:

1. Create required databases (including the one for cell0).
2. Nova db sync
3. nova cell0 mapping and schema creation.
4. Adding compute nodes
5. mapping compute nodes (by running nova-manage cell_v2 discover_hosts)

For step 3 we'd need simple_cell_setup to return 0 when there are no
compute nodes yet, or a separate command for that step.

With current behavior of nova-manage the only working workflow we can do
is:

1. Create required databases (including the one for cell0).
2. Nova db sync
3. Adding all compute nodes
4. nova cell0 mapping and schema creation with "nova-manage cell_v2 
simple_cell_setup".

Am I right? Is there a better alternative?


[1] https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L1112-L1114
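
A minimal sketch of the behaviour behind [1] (the helper names here are
illustrative, not the real nova-manage code), showing why orchestration
tools see a failure before any compute node has registered:

def simple_cell_setup(ctxt):
    _setup_cell0(ctxt)                         # "Cell0 is already setup."
    hosts = _get_unmapped_compute_hosts(ctxt)  # hypothetical helper
    if not hosts:
        print('No hosts found to map to cell, exiting.')
        return 1    # puppet-nova's exec resource reads this as a failure
    _map_cell_and_hosts(ctxt, hosts)
    return 0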

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: puppet-nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Undecided
 Status: New

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Also affects: puppet-nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656276

Title:
  Error running nova-manage  cell_v2 simple_cell_setup when configuring
  nova with puppet-nova

Status in OpenStack Compute (nova):
  New
Status in puppet-nova:
  New
Status in tripleo:
  New

Bug description:
  When installing and configuring nova with puppet-nova (with either
  tripleo, packstack or puppet-openstack-integration), we are getting
  following errors:

  Debug: Executing: '/usr/bin/nova-manage  cell_v2 simple_cell_setup 
--transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
  Debug: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Sleeping for 5 seconds between tries
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Cell0 is already setup.
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 No hosts found to map to cell, exiting.

  The issue seems to be that "nova-manage cell_v2 simple_cell_setup" is run
  as part of the nova database initialization, before any compute nodes
  have been created, and the command returns 1 in that case [1]. However,
  note that the previous steps (Cell0 mapping and schema migration) were
  run successfully.

  I think for nova bootstrap a reasonable orchestrated workflow would
  be:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. nova cell0 mapping and schema creation.
  4. Adding compute nodes
  5. mapping compute nodes (by running nova-manage cell_v2 discover_hosts)

  For step 3 we'd need simple_cell_setup to return 0 when there are no
  compute nodes yet, or a separate command for that step.

  With current behavior of nova-manage the only working workflow we can
  do is:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. Adding all compute nodes
  4. nova cell0 mapping and schema creation with "nova-manage cell_v2 
simple_cell_setup".

  Am I right? Is there a better alternative?

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L1112-L1114

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656266] [NEW] A member can be created successfully with any value of <MEMBER_ID> parameter without any error

2017-01-13 Thread Kanika Singh
Public bug reported:

Logically, MEMBER_ID should be the tenant_id that is to be added as a
member. But in this case, the MEMBER_ID parameter is not verified for
existence. One can give any value as MEMBER_ID, and a database entry will
be created for the specified value.

eg:

[root@controller ~(keystone_admin)]# glance member-create c03908a7-6166-4b2f-974e-ae9aa60f5472 abc
+--------------------------------------+-----------+---------+
| Image ID                             | Member ID | Status  |
+--------------------------------------+-----------+---------+
| c03908a7-6166-4b2f-974e-ae9aa60f5472 | abc       | pending |
+--------------------------------------+-----------+---------+


This happens because there is no check for the validity of MEMBER_ID. The value 
is passed as it is given in the command.

There should be a check that fetches the tenant list and validates the
entered MEMBER_ID value in the create() method in
glance/api/v2/image_members.py, as sketched below.
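
A minimal sketch of such a check, assuming a pre-configured keystone client
handle is available to the API layer (the handle, the helper and the exact
create() signature are assumptions, not current glance code):

from webob import exc

# keystone_client: assumed, already-authenticated keystoneclient handle
def create(self, req, image_id, member_id):
    known_projects = [p.id for p in keystone_client.projects.list()]
    if member_id not in known_projects:
        raise exc.HTTPBadRequest(
            explanation='%s is not an existing project id' % member_id)
    # ... fall through to the existing member-creation logic ...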

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1656266

Title:
  A member can be created successfully with any value of <MEMBER_ID>
  parameter without any error

Status in Glance:
  New

Bug description:
  Logically, MEMBER_ID should be the tenant_id that is to be added as a
  member. But in this case, the MEMBER_ID parameter is not verified for
  existence. One can give any value as MEMBER_ID, and a database entry
  will be created for the specified value.

  eg:

  [root@controller ~(keystone_admin)]# glance member-create c03908a7-6166-4b2f-974e-ae9aa60f5472 abc
  +--------------------------------------+-----------+---------+
  | Image ID                             | Member ID | Status  |
  +--------------------------------------+-----------+---------+
  | c03908a7-6166-4b2f-974e-ae9aa60f5472 | abc       | pending |
  +--------------------------------------+-----------+---------+

  
  This happens because there is no check for the validity of MEMBER_ID. The 
value is passed as it is given in the command.

  There should be a check that fetches the tenant list and validates the
  entered MEMBER_ID value in the create() method in
  glance/api/v2/image_members.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1656266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656256] [NEW] limit=-1 doesn't work

2017-01-13 Thread jichenjc
Public bug reported:

Newton code: we don't allow limit = -1 (API layer call db and

[root@cmabvt compute] # nova help hypervisor-list
usage: nova hypervisor-list [--matching <hostname>] [--marker <marker>]
                            [--limit <limit>]

List hypervisors. (Supported by API versions '2.0' - '2.latest') [hint: use
'--os-compute-api-version' flag to show help message for proper version]

Optional arguments:
  --matching <hostname>  List hypervisors matching the given <hostname>. If
                         matching is used limit and marker options will be
                         ignored.
  --marker <marker>      The last hypervisor of the previous page; displays
                         list of hypervisors after "marker".
  --limit <limit>        Maximum number of hypervisors to display. If limit ==
                         -1, all hypervisors will be displayed. If limit is
                         bigger than 'osapi_max_limit' option of Nova API,
                         limit 'osapi_max_limit' will be used instead.
[root@cmabvt compute] # nova hypervisor-list --limit -1
ERROR (BadRequest): Invalid input received: limit must be >= 0 (HTTP 400) 
(Request-ID: req-11889244-903f-446b-a712-241fada50e56)

This is because we have this validation here:

def _get_int_param(request, param):
    """Extract integer param from request or fail."""
    try:
        int_param = utils.validate_integer(request.GET[param], param,
                                           min_value=0)
    except exception.InvalidInput as e:
        raise webob.exc.HTTPBadRequest(explanation=e.format_message())
    return int_param
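
One way to honour the documented "-1 means everything" behaviour would be to
accept -1 here and translate it into "no explicit limit" (a sketch only, not
a merged fix; osapi_max_limit would still cap the page size):

def _get_int_param(request, param):
    """Extract integer param from request or fail."""
    try:
        int_param = utils.validate_integer(request.GET[param], param,
                                           min_value=-1)
    except exception.InvalidInput as e:
        raise webob.exc.HTTPBadRequest(explanation=e.format_message())
    # -1 means "show everything"; treat it as no explicit limit.
    return None if int_param == -1 else int_param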

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656256

Title:
  limit=-1 doesn't work

Status in OpenStack Compute (nova):
  New

Bug description:
  Newton code: we don't allow limit = -1 (API layer call db and

  [root@cmabvt compute] # nova help hypervisor-list
  usage: nova hypervisor-list [--matching <hostname>] [--marker <marker>]
                              [--limit <limit>]

  List hypervisors. (Supported by API versions '2.0' - '2.latest') [hint: use
  '--os-compute-api-version' flag to show help message for proper version]

  Optional arguments:
    --matching <hostname>  List hypervisors matching the given <hostname>. If
                           matching is used limit and marker options will be
                           ignored.
    --marker <marker>      The last hypervisor of the previous page; displays
                           list of hypervisors after "marker".
    --limit <limit>        Maximum number of hypervisors to display. If limit ==
                           -1, all hypervisors will be displayed. If limit is
                           bigger than 'osapi_max_limit' option of Nova API,
                           limit 'osapi_max_limit' will be used instead.
  [root@cmabvt compute] # nova hypervisor-list --limit -1
  ERROR (BadRequest): Invalid input received: limit must be >= 0 (HTTP 400) 
(Request-ID: req-11889244-903f-446b-a712-241fada50e56)

  This is because we have this validation here:

  def _get_int_param(request, param):
      """Extract integer param from request or fail."""
      try:
          int_param = utils.validate_integer(request.GET[param], param,
                                             min_value=0)
      except exception.InvalidInput as e:
          raise webob.exc.HTTPBadRequest(explanation=e.format_message())
      return int_param

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656241] [NEW] got an unexpected keyword argument 'app_name'

2017-01-13 Thread Jack Ivanov
Public bug reported:

I'm going to deploy Newton and after I installed Keystone I got this
error:

# openstack --debug endpoint list
START with options: [u'--debug', u'endpoint', u'list']
options: Namespace(access_key='', access_secret='***', access_token='***', 
access_token_endpoint='', access_token_type='', auth_type='', 
auth_url='http://p025.domain.com:35357/v3', authorization_code='', cacert=None, 
cert='', client_id='', client_secret='***', cloud='', consumer_key='', 
consumer_secret='***', debug=True, default_domain='default', 
default_domain_id='', default_domain_name='', deferred_help=False, 
discovery_endpoint='', domain_id='', domain_name='', endpoint='', 
identity_provider='', identity_provider_url='', insecure=None, interface='', 
key='', log_file=None, old_profile=None, openid_scope='', 
os_baremetal_api_version='1.9', os_beta_command=False, 
os_compute_api_version='', os_container_infra_api_version='1', 
os_dns_api_version='2', os_identity_api_version='3', os_image_api_version='2', 
os_network_api_version='', os_object_api_version='', 
os_orchestration_api_version='1', os_project_id=None, os_project_name=None, 
os_volume_api_version='', os_workflow_api_version='2', 
 passcode='', password='***', profile=None, project_domain_id='', 
project_domain_name='default', project_id='', project_name='admin', 
protocol='', redirect_uri='', region_name='', timing=False, token='***', 
trust_id='', url='', user_domain_id='', user_domain_name='default', user_id='', 
username='admin', verbose_level=3, verify=None)
Auth plugin password selected
auth_config_hook(): {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
u'metering_api_version': u'2', 'auth_url': 'http://p025.domain.com:35357/v3', 
u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 
'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': 
'2', u'object_store_api_version': u'1', 'username': 'admin', 
'container_infra_api_version': '1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': '1.9', 'auth': 
{'user_domain_name': 'default', 'project_name': 'admin', 'project_domain_name': 
'default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': '1', 'timing': False, 'password': '***', 
'cacert': None, u'key_manager_api_version': u'v1', 'workflow_api_version': '2', 
'deferred_help': False, u'identity_api_version': '3', 
 u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', 
u'status': u'active', 'debug': True, u'interface': None, 
u'disable_vendor_agent': {}}
defaults: {u'auth_type': 'password', u'status': u'active', 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': 
u'2', u'metering_api_version': u'2', u'image_api_use_tasks': False, 
u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 
'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': 
u'qcow2', u'key_manager_api_version': u'v1', 'verify': True, 
u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': None, 
u'secgroup_source': u'neutron', u'container_api_version': u'1', 
u'dns_api_version': u'2', u'object_store_api_version': u'1', u'interface': 
None, u'disable_vendor_agent': {}}
cloud cfg: {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
u'metering_api_version': u'2', 'auth_url': 'http://p025.domain.com:35357/v3', 
u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 
'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': 
'2', u'object_store_api_version': u'1', 'username': 'admin', 
'container_infra_api_version': '1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': '1.9', 'auth': 
{'user_domain_name': 'default', 'project_name': 'admin', 'project_domain_name': 
'default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': '1', 'timing': False, 'password': '***', 
'cacert': None, u'key_manager_api_version': u'v1', 'workflow_api_version': '2', 
'deferred_help': False, u'identity_api_version': '3', u'volume_
 api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'status': 
u'active', 'debug': True, u'interface': None, u'disable_vendor_agent': {}}
compute API version 2, cmd group openstack.compute.v2
network API version 2, cmd group openstack.network.v2
image API version 2, cmd group openstack.image.v2
volume API version 2, cmd group openstack.volume.v2
identity API version 3, cmd group openstack.identity.v3
object_store API version 1, cmd group openstack.object_store.v1

[Yahoo-eng-team] [Bug 1656242] [NEW] nova live snapshot of rbd instance fails on xen hypervisor

2017-01-13 Thread ebl...@nde.ag
Public bug reported:

Description:
We use a Mitaka environment with one control and three compute nodes (all
running on openSUSE Leap 42.1); the compute nodes are Xen hypervisors, and
our storage backend is Ceph (for nova, glance and cinder).

When we try to snapshot a running instance, it's always a cold snapshot,
nova-compute reports:

2017-01-12 12:55:51.919 [instance: 14b75237-7619-481f-9636-792b64d1be17] 
Beginning cold snapshot process
2017-01-12 12:59:27.085 [instance: 14b75237-7619-481f-9636-792b64d1be17] 
Snapshot image upload complete

On rbd level the live snapshot process works as expected, without any
downtime of the instance, we use it for our backup strategy for example.

With some additional log statements in /usr/lib/python2.7/site-
packages/nova/virt/libvirt/driver.py I found that nova always passes the
hard-coded hypervisor driver "qemu" into the function
_host.has_min_version(); on a Xen host that check always returns "false",
so "live_snapshot" is disabled. Replacing host.HV_DRIVER_QEMU with
host.HV_DRIVER_XEN results in a working live snapshot:

---cut here---
compute1:~ # diff -u /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py.mod
--- /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py       2017-01-13 09:33:23.257525708 +0100
+++ /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py.mod   2017-01-13 09:33:46.349105366 +0100
@@ -1649,9 +1649,14 @@
         #   redundant because LVM supports only cold snapshots.
         #   It is necessary in case this situation changes in the
         #   future.
+        if CONF.libvirt.virt_type == 'xen':
+            hv_driver = host.HV_DRIVER_XEN
+        else:
+            hv_driver = host.HV_DRIVER_QEMU
+
         if (self._host.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION,
                                        MIN_QEMU_LIVESNAPSHOT_VERSION,
-                                       host.HV_DRIVER_QEMU)
+                                       hv_driver)
                 and source_type not in ('lvm')
                 and not CONF.ephemeral_storage_encryption.enabled
                 and not CONF.workarounds.disable_libvirt_livesnapshot):
---cut here---

nova-compute reports:

2017-01-12 17:20:22.760 [instance: 14b75237-7619-481f-9636-792b64d1be17] 
instance snapshotting
2017-01-12 17:20:24.049 [instance: 14b75237-7619-481f-9636-792b64d1be17] 
Beginning live snapshot process
2017-01-12 17:24:38.997 [instance: 14b75237-7619-481f-9636-792b64d1be17] 
Snapshot image upload complete

The versions we use:

compute1:~ # nova --version
3.3.0

compute1:~ # ceph --version
ceph version 0.94.7-84-g8e6f430 (8e6f430683e4d8293e31fd4eb6cb09be96960cfa)

compute1:~ # libvirtd --version
libvirtd (libvirt) 2.5.0

compute1:~ # qemu-img --version
qemu-img version 2.7.0((SUSE Linux)), Copyright (c) 2003-2016 Fabrice Bellard 
and the QEMU Project developers

compute1:~ # rpm -qa | grep xen
xen-4.7.0_12-461.1.x86_64
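
For reference, has_min_version() compares its hv_type argument against the
driver name reported by libvirt, which is "Xen" on these compute nodes, so a
caller that hard-codes HV_DRIVER_QEMU can never pass the check there. A
simplified sketch (illustrative only, not the real nova implementation):

HV_DRIVER_QEMU = 'QEMU'
HV_DRIVER_XEN = 'Xen'

class Host(object):
    def __init__(self, connection):
        self._connection = connection   # libvirt connection

    def has_min_version(self, lv_ver=None, hv_ver=None, hv_type=None):
        # libvirt's getType() returns 'Xen' on a xen host, so
        # hv_type='QEMU' fails here unconditionally.
        if hv_type is not None and hv_type != self._connection.getType():
            return False
        # ... libvirt / hypervisor version comparisons elided ...
        return True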

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656242

Title:
  nova live snapshot of rbd instance fails on xen hypervisor

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  We use a Mitaka environment with one control and three compute nodes (all
  running on openSUSE Leap 42.1); the compute nodes are Xen hypervisors,
  and our storage backend is Ceph (for nova, glance and cinder).

  When we try to snapshot a running instance, it's always a cold
  snapshot, nova-compute reports:

  2017-01-12 12:55:51.919 [instance: 14b75237-7619-481f-9636-792b64d1be17] 
Beginning cold snapshot process
  2017-01-12 12:59:27.085 [instance: 14b75237-7619-481f-9636-792b64d1be17] 
Snapshot image upload complete

  On rbd level the live snapshot process works as expected, without any
  downtime of the instance, we use it for our backup strategy for
  example.

  With some additional log statements in /usr/lib/python2.7/site-
  packages/nova/virt/libvirt/driver.py I found that nova always passes the
  hard-coded hypervisor driver "qemu" into the function
  _host.has_min_version(); on a Xen host that check always returns "false",
  so "live_snapshot" is disabled. Replacing host.HV_DRIVER_QEMU with
  host.HV_DRIVER_XEN results in a working live snapshot:

  ---cut here---
  compute1:~ # diff -u 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py.mod
  --- /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py
2017-01-13 09:33:23.257525708 +0100
  +++ /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py.mod
2017-01-13 09:33:46.349105366 +0100
  @@ -1649,9 +1649,14 @@
   #   redundant because LVM supports only cold snapshots.
   #   It is necessary in 

[Yahoo-eng-team] [Bug 1649762] Re: KeyError in vpn agent

2017-01-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/410530
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=b942ead930bc1811b9d6f759be343237e1547d0c
Submitter: Jenkins
Branch:master

commit b942ead930bc1811b9d6f759be343237e1547d0c
Author: YAMAMOTO Takashi 
Date:   Wed Dec 14 13:14:31 2016 +0900

Restore RPC after tenant_id -> project_id DB column rename

Closes-Bug: #1649762
Change-Id: Iaf99814082d122512e5ee17dd2dc6f6682c3d196


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649762

Title:
  KeyError in vpn agent

Status in neutron:
  Fix Released

Bug description:
  eg. http://logs.openstack.org/11/410511/2/check/gate-neutron-vpnaas-
  dsvm-api-ubuntu-xenial-nv/7e31cf8/logs/screen-neutron-vpnaas.txt.gz

  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
[req-3b108eca-b4cb-470e-be8e-d20d2829974e tempest-BaseTestCase-373340779 -] 
Exception during message handling
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 884, in vpnservice_updated
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
self.sync(context, [router] if router else [])
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 1049, in sync
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
self.report_status(context)
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 1005, in report_status
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server if not 
self.should_be_reported(context, process):
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 999, in should_be_reported
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
process.vpnservice["tenant_id"] == context.tenant_id):
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server KeyError: 
'tenant_id'
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server
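
  The crash is the dict lookup in should_be_reported(): after the tenant_id
  -> project_id rename, the vpnservice dict delivered over RPC no longer
  carries a 'tenant_id' key. A tolerant lookup would avoid the KeyError
  (sketch only; the merged fix instead restores the expected RPC payload):

  def should_be_reported(self, context, process):
      vpnservice = process.vpnservice
      service_tenant = vpnservice.get('tenant_id',
                                      vpnservice.get('project_id'))
      return context.is_admin or service_tenant == context.tenant_id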

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656215] [NEW] Add qed disk format

2017-01-13 Thread yuyafei
Public bug reported:

QED is an image format (like qcow2, vmdk, etc) that supports backing files and 
sparse images.
http://wiki.qemu.org/Features/QED
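
If 'qed' were accepted as a disk format, registering such an image through
python-glanceclient would look roughly like this (endpoint, token and file
name are placeholders):

from glanceclient import Client

glance = Client('2', endpoint='http://controller:9292', token='<auth-token>')
image = glance.images.create(name='test-qed', disk_format='qed',
                             container_format='bare')
glance.images.upload(image.id, open('test.qed', 'rb'))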

** Affects: glance
 Importance: Undecided
 Assignee: yuyafei (yu-yafei)
 Status: New

** Affects: python-glanceclient
 Importance: Undecided
 Assignee: yuyafei (yu-yafei)
 Status: New


** Tags: qed

** Changed in: glance
 Assignee: (unassigned) => yuyafei (yu-yafei)

** Project changed: glance => python-glanceclient

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => yuyafei (yu-yafei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1656215

Title:
  Add qed disk format

Status in Glance:
  New
Status in Glance Client:
  New

Bug description:
  QED is an image format (like qcow2, vmdk, etc) that supports backing files 
and sparse images.
  http://wiki.qemu.org/Features/QED

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1656215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp