[Yahoo-eng-team] [Bug 1670524] Re: auto allocate external network callback changes is_default

2017-03-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/442219
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ea27e1aa1ed7ad802d1167c1c80ae51fcd5cee99
Submitter: Jenkins
Branch: master

commit ea27e1aa1ed7ad802d1167c1c80ae51fcd5cee99
Author: Armando Migliaccio 
Date:   Mon Mar 6 16:47:25 2017 -0800

Update is_default field only when specified in the request

Closes-bug: 1670524

Change-Id: Ie46fe7126693bcf6732332a479f4b2609c402c5d


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1670524

Title:
  auto allocate external network callback changes is_default

Status in neutron:
  Fix Released

Bug description:
  The code at [1] is always executed on a network update, so a network
  update that touches something unrelated to the 'is_default' column
  still resets 'is_default' to False. Completely unrelated network
  updates therefore end up clearing the is_default value.

  [1] https://github.com/openstack/neutron/blob/ca751a1486812e498d9539980d8cd7bf6d7ebab7/neutron/services/auto_allocate/db.py#L68-L70
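
  A minimal sketch of the kind of guard the fix applies (the callback
  and helper names here are illustrative, not neutron's actual ones):
  only touch is_default when the update request actually carries it.

  def handle_network_update(resource, event, trigger, payload):
      request = payload['request']   # fields the caller actually sent
      network = payload['network']
      # Only update the column when the request explicitly specified
      # it; otherwise any unrelated network update would reset it to
      # False, which is exactly this bug.
      if 'is_default' not in request:
          return
      _set_network_is_default(network, request['is_default'])  # hypothetical helper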

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1670524/+subscriptions



[Yahoo-eng-team] [Bug 1670950] [NEW] nova can't bind to dpdk (vhostuser) socket

2017-03-07 Thread sapd
Public bug reported:

Hi everyone. I'm installing the OpenStack Ocata release with OVS-DPDK.
I installed the openvswitch-switch-dpdk package from the Ubuntu repository.
I set up DPDK using this tutorial:
https://help.ubuntu.com/lts/serverguide/DPDK.html
When I create a vhostuser port, it works and its socket has the correct
permissions. But when I launch an instance on this compute node, it fails.

2017-03-07 03:55:44.381 21100 ERROR nova.compute.manager [instance:
5c5c4eae-8f07-4128-9f84-0f2ed87319d9] libvirtError: internal error:
process exited while connecting to monitor: 2017-03-07T08:55:43.986446Z
qemu-system-x86_64: -chardev
socket,id=charnet0,path=/var/run/openvswitch/vhost_socket/vhufcf54d1f-4b,server:
Failed to bind socket to /var/run/openvswitch/vhost_socket/vhufcf54d1f-4b:
Permission denied

Here is full log: http://paste.ubuntu.com/24129861/
OVS_DPDK version: openvswitch-switch-dpdk   2.6.1-0ubuntu5~cloud0
DPDK Version:  dpdk  16.11-1ubuntu3~cloud0
Thanks

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670950

Title:
  nova can't bind to dpdk (vhostuser) socket

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi everyone. I'm installing the OpenStack Ocata release with OVS-DPDK.
  I installed the openvswitch-switch-dpdk package from the Ubuntu repository.
  I set up DPDK using this tutorial:
  https://help.ubuntu.com/lts/serverguide/DPDK.html
  When I create a vhostuser port, it works and its socket has the correct
  permissions. But when I launch an instance on this compute node, it fails.

  2017-03-07 03:55:44.381 21100 ERROR nova.compute.manager [instance:
  5c5c4eae-8f07-4128-9f84-0f2ed87319d9] libvirtError: internal error:
  process exited while connecting to monitor: 2017-03-07T08:55:43.986446Z
  qemu-system-x86_64: -chardev
  socket,id=charnet0,path=/var/run/openvswitch/vhost_socket/vhufcf54d1f-4b,server:
  Failed to bind socket to /var/run/openvswitch/vhost_socket/vhufcf54d1f-4b:
  Permission denied

  Here is full log: http://paste.ubuntu.com/24129861/
  OVS_DPDK version: openvswitch-switch-dpdk   2.6.1-0ubuntu5~cloud0
  DPDK Version:  dpdk  16.11-1ubuntu3~cloud0
  Thanks
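
  Since QEMU (not OVS) must create the server-side socket in the
  OVS-owned directory, a quick way to reproduce the failure outside
  nova is to attempt the same bind as the QEMU user. A minimal check,
  assuming QEMU runs as 'libvirt-qemu' (distro-dependent; see the
  'user' setting in qemu.conf), run as root:

  import os
  import pwd
  import socket

  SOCK_DIR = '/var/run/openvswitch/vhost_socket'
  QEMU_USER = 'libvirt-qemu'  # assumption; check /etc/libvirt/qemu.conf

  os.seteuid(pwd.getpwnam(QEMU_USER).pw_uid)
  try:
      path = os.path.join(SOCK_DIR, 'permtest.sock')
      s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      s.bind(path)  # the same operation QEMU performs for a ,server chardev
      s.close()
      os.unlink(path)
      print('OK: %s can bind sockets in %s' % (QEMU_USER, SOCK_DIR))
  except OSError as e:
      print('FAIL: %s' % e)  # matches the "Permission denied" above
  finally:
      os.seteuid(0)

  If the bind fails, making the socket directory writable by the QEMU
  user (or pointing OVS at a vhost-sock-dir that is) is the usual
  remedy.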

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1670950/+subscriptions



[Yahoo-eng-team] [Bug 1670947] [NEW] Horizon Images page is blank

2017-03-07 Thread Madhur Khurana
Public bug reported:

Installed the OpenStack Newton release on Ubuntu 16.04 LTS. All OpenStack
services are up and running.
The issue is that the Images page is blank on the Horizon dashboard.

Steps:

Log in to the Horizon dashboard.
Go to Project > Compute > Images.
The Images page is blank.

Using the CLI, images are listed correctly.

root@ubuntu:/home/stack# openstack image list
+--------------------------------------+---------------------------------+--------+
| ID                                   | Name                            | Status |
+--------------------------------------+---------------------------------+--------+
| 50e2902f-280a-4c91-ac87-71c64fe136ad | OpenWRT                         | active |
| e18422ca-94d5-4145-aa59-e79ef2373fc4 | cirros-0.3.4-x86_64-uec         | active |
| 1387d1bd-9c36-48cd-af91-b62b5e5d7248 | cirros-0.3.4-x86_64-uec-ramdisk | active |
| 1ef3b2b5-669a-4283-8a65-1e3b727ce00d | cirros-0.3.4-x86_64-uec-kernel  | active |
+--------------------------------------+---------------------------------+--------+

OpenStack version: 3.2.1

Attaching the screenshot for the same.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Dashboard.JPG"
   https://bugs.launchpad.net/bugs/1670947/+attachment/4833396/+files/Dashboard.JPG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1670947

Title:
  Horizon Images page is blank

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Installed the OpenStack Newton release on Ubuntu 16.04 LTS. All OpenStack
  services are up and running.
  The issue is that the Images page is blank on the Horizon dashboard.

  Steps:

  Log in to the Horizon dashboard.
  Go to Project > Compute > Images.
  The Images page is blank.

  Using the CLI, images are listed correctly.

  root@ubuntu:/home/stack# openstack image list
  +--------------------------------------+---------------------------------+--------+
  | ID                                   | Name                            | Status |
  +--------------------------------------+---------------------------------+--------+
  | 50e2902f-280a-4c91-ac87-71c64fe136ad | OpenWRT                         | active |
  | e18422ca-94d5-4145-aa59-e79ef2373fc4 | cirros-0.3.4-x86_64-uec         | active |
  | 1387d1bd-9c36-48cd-af91-b62b5e5d7248 | cirros-0.3.4-x86_64-uec-ramdisk | active |
  | 1ef3b2b5-669a-4283-8a65-1e3b727ce00d | cirros-0.3.4-x86_64-uec-kernel  | active |
  +--------------------------------------+---------------------------------+--------+

  OpenStack version: 3.2.1

  Attaching the screenshot for the same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1670947/+subscriptions



[Yahoo-eng-team] [Bug 1669372] Re: trunk connectivity scenario test is wrong

2017-03-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/439727
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ed5fc12ab7ed5404c3eba49874e68b0adde33b67
Submitter: Jenkins
Branch: master

commit ed5fc12ab7ed5404c3eba49874e68b0adde33b67
Author: Kevin Benton 
Date:   Wed Mar 1 09:14:45 2017 -0800

Fix trunk subport scenario test

This test wasn't allowing ping before asserting
that ping should work. It only passes in the OVS
scenario because the job is configured to use the
hybrid iptables driver, which does not filter vlan-aware-vms
on OVS.

Closes-Bug: #1669372
Change-Id: Ia70360d664d5efd4154c0eb843143f1567785d59


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1669372

Title:
  trunk connectivity scenario test is wrong

Status in neutron:
  Fix Released

Bug description:
  The scenario test for trunk connectivity is incorrect because it
  asserts that pings should pass without adding a rule that actually
  allows pings. It passes on OVSFW only because the job is currently
  misconfigured to load the wrong firewall driver, one that does not
  perform trunk filtering.
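
  A standalone illustration of the missing step, using openstacksdk (an
  assumption for brevity; the actual fix edits neutron's tempest
  scenario test, and the cloud and security group names below are
  illustrative): allow ICMP ingress on the test security group before
  asserting that ping works.

  import openstack

  conn = openstack.connect(cloud='devstack-admin')  # illustrative cloud name
  sg = conn.network.find_security_group('trunk-test-sg')  # illustrative name
  # Without a rule like this, the firewall is expected to drop ICMP, so
  # the ping assertion should only pass once the rule exists.
  conn.network.create_security_group_rule(
      security_group_id=sg.id,
      direction='ingress',
      protocol='icmp',
  )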

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1669372/+subscriptions



[Yahoo-eng-team] [Bug 1670868] [NEW] AttributeError: 'MidonetL3ServicePlugin' object has no attribute 'get_l3_agents_hosting_routers'

2017-03-07 Thread YAMAMOTO Takashi
Public bug reported:

neutron-fwaas should not assume the L3 plugin has get_l3_agents_hosting_routers.
The assumption was introduced by Id6cb991aee959319997bb15ece240c09d4ac5e39.
MidonetL3ServicePlugin doesn't have it.

e.g. http://logs.openstack.org/92/440892/1/check/gate-networking-midonet-python27-ubuntu-xenial/6a4b725/testr_results.html.gz

Traceback (most recent call last):
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/api/v2/resource.py",
 line 79, in resource
result = method(request=request, **args)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/api/v2/base.py",
 line 436, in create
return self._create(request, body, **kwargs)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/db/api.py",
 line 93, in wrapped
setattr(e, '_RETRY_EXCEEDED', True)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/db/api.py",
 line 89, in wrapped
return f(*args, **kwargs)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 151, in wrapper
ectxt.value = e.inner_exc
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 139, in wrapper
return f(*args, **kwargs)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/db/api.py",
 line 129, in wrapped
traceback.format_exc())
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/db/api.py",
 line 124, in wrapped
return f(*dup_args, **dup_kwargs)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/api/v2/base.py",
 line 549, in _create
obj = do_create(body)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/api/v2/base.py",
 line 531, in do_create
request.context, reservation.reservation_id)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.zBhXnCshkM/openstack/neutron/neutron/api/v2/base.py",
 line 524, in do_create
return obj_creator(request.context, **kwargs)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.ahfzmovvD0/openstack/neutron-fwaas/neutron_fwaas/services/firewall/fwaas_plugin.py",
 line 277, in create_firewall
hosts = self._get_hosts_to_notify(context, fw_new_rtrs)
  File 
"/home/jenkins/workspace/gate-networking-midonet-python27-ubuntu-xenial/.tmp/tmp.ahfzmovvD0/openstack/neutron-fwaas/neutron_fwaas/services/firewall/fwaas_plugin.py",
 line 175, in _get_hosts_to_notify
nl_constants.L3).get_l3_agents_hosting_routers(
AttributeError: 'MidonetL3ServicePlugin' object has no attribute 
'get_l3_agents_hosting_routers'
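
A hedged sketch of a defensive guard (names are taken from the
traceback, but the fix shown is illustrative, not necessarily what
landed): fall back gracefully when the loaded L3 service plugin has no
agent-scheduling support.

from neutron_lib.plugins import constants as nl_constants
from neutron_lib.plugins import directory

def _get_hosts_to_notify(context, router_ids):
    l3_plugin = directory.get_plugin(nl_constants.L3)
    if not hasattr(l3_plugin, 'get_l3_agents_hosting_routers'):
        # Plugins such as MidonetL3ServicePlugin do not schedule L3
        # agents, so there are no agent hosts to notify.
        return []
    agents = l3_plugin.get_l3_agents_hosting_routers(
        context, router_ids, admin_state_up=True, active=True)
    return [agent.host for agent in agents]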

** Affects: networking-midonet
 Importance: Critical
 Assignee: YAMAMOTO Takashi (yam

[Yahoo-eng-team] [Bug 1669875] Re: identify openstack vmware platform

2017-03-07 Thread Scott Moser
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669875

Title:
  identify openstack vmware platform

Status in cloud-init:
  Confirmed
Status in OpenStack Compute (nova):
  New

Bug description:
  We need a way to identify locally, from inside a VMware guest, that
  we are running on OpenStack.
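
  For context, a sketch of the kind of local check cloud-init's
  platform detection performs (the marker strings are assumptions): on
  KVM, Nova exposes recognizable DMI data, but on VMware the
  hypervisor's own DMI strings show through, so a check like this
  fails, which is exactly the gap this bug describes.

  def dmi(field):
      try:
          with open('/sys/class/dmi/id/' + field) as f:
              return f.read().strip()
      except IOError:
          return ''

  def looks_like_openstack():
      # Marker values are assumptions based on what Nova/KVM guests
      # typically expose; VMware guests report VMware strings instead.
      return (dmi('product_name') in ('OpenStack Nova', 'OpenStack Compute')
              or dmi('chassis_asset_tag').startswith('OpenStack'))

  print(looks_like_openstack())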

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1669875/+subscriptions



[Yahoo-eng-team] [Bug 1645910] Re: Trust creation for SSO users fails in assert_user_enabled

2017-03-07 Thread Lance Bragstad
With https://review.openstack.org/#/c/399684/ implemented, this should
no longer be an issue. Federated users should resolve to a domain, and
in the default case, the domain of the identity provider. This is the
behavior as of the Ocata release.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645910

Title:
  Trust creation for SSO users fails in assert_user_enabled

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Openstack version: Mitaka
  Operation: Heat stack/trust creation for SSO users

  For SSO users, keystone trust creation workflow fails while asserting
  that the user is enabled.

  The assert_user_enabled() function in keystone/identity/core.py fails at the 
below line:
  self.resource_api.assert_domain_enabled(user['domain_id'])

  Since user['domain_id'] throws a KeyError for federated users, this
  function raises an exception. To avoid this failure, we should invoke
  assert_domain_enabled() check conditionally only for local users.

  Proposing to add an 'is_local' user flag to distinguish between local
  and federated users so that we can conditionally assert the user
  domain and do other such things.
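
  A minimal sketch of the proposed guard (illustrative only; per the
  comment above, the issue was ultimately resolved differently, by
  resolving federated users to a domain): skip the domain check when
  the user record has no domain_id.

  def assert_user_enabled(identity_api, resource_api, user_id):
      user = identity_api.get_user(user_id)
      # Federated (SSO) users may lack 'domain_id'; only local users
      # are guaranteed to have one.
      if user.get('domain_id'):
          resource_api.assert_domain_enabled(user['domain_id'])
      if not user.get('enabled', True):
          raise ValueError('user %s is disabled' % user_id)  # placeholder exception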

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645910/+subscriptions



[Yahoo-eng-team] [Bug 1659760] Re: General scale issue on neutron-fwaas due to RPC broadcast usage (fanout)

2017-03-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/426287
Committed: https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=da425fd913c168d5477588df5d8574fce21e2eb7
Submitter: Jenkins
Branch: master

commit da425fd913c168d5477588df5d8574fce21e2eb7
Author: Bertrand Lallau 
Date:   Fri Jan 27 16:52:09 2017 +0100

Fix RPC scale issue using cast instead of fanout v1

Currently, all CRUD methods used on FWaaS v1 resources (Firewall,
FirewallPolicy, FirewallRule) result in AMQP fanout cast requests
being sent to all L3 agents (even those that have no routers or
firewalls).

This fix sends AMQP casts only to the L3 agents affected by the
corresponding firewall.

The same trouble also impacts FWaaS v2 and will be solved in a
follow-up change.

Change-Id: Id6cb991aee959319997bb15ece240c09d4ac5e39
Closes-Bug: #1659760


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1659760

Title:
  General scale issue on neutron-fwaas due to RPC broadcast usage
  (fanout)

Status in neutron:
  Fix Released

Bug description:
  Currently, for all CRUD methods used on FWaaS resources (Firewall,
  FirewallPolicy, FirewallRule, FirewallGroup, ...) an AMQP fanout cast
  is sent to all L3 agents.
  This is a wrong design; AMQP casts should be sent only to the L3 agents
  managing routers with firewalls related to the tenant.

  This wrong design results in many bugs already reported:

  1) FirewallNotFound during firewall_deleted
  https://bugs.launchpad.net/neutron/+bug/1622460
  https://bugs.launchpad.net/neutron/+bug/1658060

  Explanation using 2 L3 agents:
  agent1: hosts a router with a firewall for the tenant
  agent2: doesn't host the tenant's router

    1. neutron firewall-delete 
    2. neutron-server sends an AMQP call "delete_firewall" to agent1 and agent2
    3. agent1 cleans the router firewall and sends back "firewall_deleted" to
       neutron-server
    4. neutron-server deletes the firewall resource from the database
    5. agent2 has nothing to clean and sends back "firewall_deleted" to
       neutron-server
    6. neutron-server gets a "FirewallNotFound" exception
       http://paste.openstack.org/raw/94663/

    But it doesn't end there :(
    7. agent2 gets back the "FirewallNotFound" exception
    8. on an RPC error it performs a kind of "full synchronisation"
       (process_services_sync), sending an AMQP call "get_tenants_with_firewalls"
    9. neutron-server responds with ALL tenants (even those not related to
       this agent)
    10. for EACH tenant, agent2 then sends an AMQP call:
        get_firewalls_for_tenant()

  Full sync bug is already reported here:
  https://bugs.launchpad.net/neutron/+bug/1618244

  2) Intermittent Tempest check failures are probably linked:
  https://bugs.launchpad.net/neutron/+bug/1649703

  3) More generally, on FWaaS CRUD operations neutron-server floods and is
  flooded by many AMQP requests.
  => this leaves the neutron-server RPC workers fully busy
  => AMQP messages accumulate in the q-firewall-plugin queue
  => RPC timeouts appear on agents (after 60s)
  => full synchronisation is triggered
  => etc., etc.
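
  A hedged sketch of the pattern the fix adopts (simplified; the real
  code lives in neutron-fwaas' agent RPC API): instead of one fanout
  cast that every L3 agent receives, issue a targeted cast per host
  that actually runs the firewall's routers.

  import oslo_messaging

  class FirewallAgentApi(object):
      def __init__(self, transport, topic):
          target = oslo_messaging.Target(topic=topic, version='1.0')
          self.client = oslo_messaging.RPCClient(transport, target)

      def delete_firewall_fanout(self, context, firewall):
          # Old behavior: every agent listening on the topic gets this.
          cctxt = self.client.prepare(fanout=True)
          cctxt.cast(context, 'delete_firewall', firewall=firewall)

      def delete_firewall(self, context, firewall, hosts):
          # New behavior: only the agents hosting the firewall's routers.
          for host in hosts:
              cctxt = self.client.prepare(server=host)
              cctxt.cast(context, 'delete_firewall', firewall=firewall)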

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1659760/+subscriptions



[Yahoo-eng-team] [Bug 1670809] [NEW] Table with no items to display blinks if filter input gets clicked

2017-03-07 Thread Eddie Ramirez
Public bug reported:

How to reproduce:
1. Go to a panel that uses "client filters", for example, Compute -> Volumes.
2. Click on the search input that lets you filter rows in the table. Make sure 
this table is empty - You'll see one row with "No items to display."
3. See how the row disappears as soon as you release the click on the search 
input.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1670809

Title:
  Table with no items to display blinks if filter input gets clicked

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:
  1. Go to a panel that uses "client filters", for example, Compute -> Volumes.
  2. Click on the search input that lets you filter rows in the table. Make 
sure this table is empty - You'll see one row with "No items to display."
  3. See how the row disappears as soon as you release the click on the search 
input.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1670809/+subscriptions



[Yahoo-eng-team] [Bug 1670419] Re: placement_database config option help is wrong

2017-03-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/442035
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=7439e9473617a871c032107ab85b1b623369975f
Submitter: Jenkins
Branch: master

commit 7439e9473617a871c032107ab85b1b623369975f
Author: Matt Riedemann 
Date:   Mon Mar 6 20:25:55 2017 -0500

Remove unused placement_database config options

The placement_database config options were added in Newton
but the actual code to use the options was reverted and is
not used. The help text actually has a typo (Ocata was 15.0.0,
not 14.0.0) and, more importantly, the help text makes it sound
like you should really be setting up a separate database for the
placement service but we don't actually use these options, we use
the api_database options for all of the placement data models.

To avoid confusion until this is actually supported, let's just
remove the options which should have been cleaned up as part of
39fb302fd9c8fc57d3e4bea1c60a02ad5067163f anyway.

Change-Id: I31293ac4689630e4113588ab2c6373cf572b8f38
Closes-Bug: #1670419


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670419

Title:
  placement_database config option help is wrong

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  The help on the [placement_database] config options is wrong: it
  mentions Ocata 14.0.0, but 14.0.0 is actually Newton; Ocata was 15.0.0:

  "# The *Placement API Database* is a separate database which is used for the 
new
  # placement-api service. In Ocata release (14.0.0) this database is optional:"

  It also has some scary words about configuring it with a separate
  database so you don't have to deal with data migration issues later to
  migrate data from the nova_api database to a separate placement
  database, but the placement_database options are not actually used in
  code. They will be when this blueprint is complete:

  https://blueprints.launchpad.net/nova/+spec/optional-placement-database

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1670419/+subscriptions



[Yahoo-eng-team] [Bug 1629371] Re: libvirt: volume snapshot stacktraces if the qemu guest agent is not available

2017-03-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/380433
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e659a6e7cbb3078b47c0860465530eab05acf73e
Submitter: Jenkins
Branch: master

commit e659a6e7cbb3078b47c0860465530eab05acf73e
Author: Matt Riedemann 
Date:   Fri Sep 30 12:16:41 2016 -0400

libvirt: check if we can quiesce before volume-backed snapshot

The NFS job in the experimental queue is stacktracing when taking
a volume-backed snapshot because the image used in the job does
not have the qemu guest agent in it. This is not fatal because the
code just continues to snapshot without quiesce.

Since we have code that can tell if we can even attempt a quiesce
we should use that here before attempting the snapshot with quiesce,
else don't even attempt it so we don't stacktrace.

Note that this also adds a check such that if the image requires
quiesce as part of a snapshot and we can't honor that, we fail.

The stacktrace is intentionally left in here in case the snapshot
with quiesce fails because we checked if we could and only get to
that point if we should be able to quiesce but it failed, which
is an error that should be investigated.

Also, the unused quiesce parameter is removed from the
test_volume_snapshot_create_libgfapi test while in this code.

Change-Id: I685c592340dc8a3b7d32834a0114f2925dc4305c
Closes-Bug: #1629371
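
A hedged sketch of the commit's logic (simplified; the real change is
in nova/virt/libvirt/driver.py, the image properties named are nova's
standard ones, and properties are shown as a plain dict for brevity):
decide up front whether quiesce can work instead of always attempting
it.

def can_quiesce(image_meta):
    # Without the qemu guest agent the fsfreeze call can never succeed.
    return image_meta.properties.get('hw_qemu_guest_agent', False)

def snapshot_with_optional_quiesce(guest, conf, image_meta):
    if can_quiesce(image_meta):
        # We should be able to quiesce; if this still fails it is a
        # real error worth the stacktrace.
        guest.snapshot(conf, reuse_ext=True, quiesce=True)
    elif image_meta.properties.get('os_require_quiesce', False):
        # The image demands quiesce but we cannot honor it: fail.
        raise RuntimeError('quiesce required but unavailable')  # placeholder
    else:
        # No guest agent: skip quiesce rather than stacktrace.
        guest.snapshot(conf, reuse_ext=True, quiesce=False)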


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629371

Title:
  libvirt: volume snapshot stacktraces if the qemu guest agent is not
  available

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The libvirt driver attempts to quiece the filesystem when doing a
  volume snapshot and will stacktrace if the qemu guest agent is not
  available:

  http://logs.openstack.org/86/147186/41/experimental/gate-tempest-dsvm-
  full-devstack-plugin-nfs-
  nv/fe58d89/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-30_10_36_32_039

  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver 
[req-93930c3c-4482-4c30-97ad-4dbcd251a9ba nova service] [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08] Unable to create quiesced VM snapshot, 
attempting again with quiescing disabled.
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08] Traceback (most recent call last):
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1827, in 
_volume_snapshot_create
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08] reuse_ext=True, quiesce=True)
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 508, in snapshot
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08] 
self._domain.snapshotCreateXML(conf.to_xml(), flags=flags)
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08] rv = execute(f, *args, **kwargs)
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08] six.reraise(c, e, tb)
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08] rv = meth(*args, **kwargs)
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 
01211c7e-b3ac-4d17-80bb-30fbe8494c08]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 2592, in 
snapshotCreateXML
  2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvi

[Yahoo-eng-team] [Bug 1670789] [NEW] Menu entries missing from Horizon if cinder v1 endpoint is deleted

2017-03-07 Thread Radoslav
Public bug reported:

In the case where there is no cinder v1 endpoint (never created, or
deleted - use "openstack service delete cinder" to reproduce), certain
volume-related menu entries are not displayed: "Create volume from
instance snapshot" and "Create instance from volume" - see attachments.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1670789

Title:
  Menu entries missing from Horizon if cinder v1 endpoint is deleted

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the case where there is no cinder v1 endpoint (never created, or
  deleted - use "openstack service delete cinder" to reproduce), certain
  volume-related menu entries are not displayed: "Create volume from
  instance snapshot" and "Create instance from volume" - see attachments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1670789/+subscriptions



[Yahoo-eng-team] [Bug 1670786] [NEW] Incorrect usage reported in Horizon for downsized Vms

2017-03-07 Thread Radoslav
Public bug reported:

When an instance is downsized, its storage does not shrink. When one
views quota usage in Horizon, it shows the current flavor's disk size,
which is not correct. The actual disk size is the largest disk size
among the flavors the instance has been resized to.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1670786

Title:
  Incorrect usage reported in Horizon for downsized Vms

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When an instance is downsized, its storage does not shrink. When one
  views quota usage in Horizon, it shows the current flavor's disk size,
  which is not correct. The actual disk size is the largest disk size
  among the flavors the instance has been resized to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1670786/+subscriptions



[Yahoo-eng-team] [Bug 1670770] [NEW] Warnings found when importing API Definitions from neutron-lib

2017-03-07 Thread Reedip
Public bug reported:

Issue:
While executing "tox -e py27" on a patch set, I found the following issue:
http://paste.openstack.org/show/601800/

Expected result: no warnings should be logged when loading
definition-only modules from neutron-lib, as they do not have the class
name that an extension loader expects.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lib

** Tags added: lib

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1670770

Title:
  Warnings found when importing API Definitions from neutron-lib

Status in neutron:
  New

Bug description:
  Issue:
  While executing "tox -e py27" on a patch set, I found the following issue:
  http://paste.openstack.org/show/601800/

  Expected result: no warnings should be logged when loading
  definition-only modules from neutron-lib, as they do not have the class
  name that an extension loader expects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1670770/+subscriptions



[Yahoo-eng-team] [Bug 1670522] Re: TypeError booting instance with libvirt+xen in _create_pty_device with libvirt>=1.3.3

2017-03-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/442209
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ac61abb7c74f1a8e3e9134bc045455fd9fdac0fa
Submitter: Jenkins
Branch: master

commit ac61abb7c74f1a8e3e9134bc045455fd9fdac0fa
Author: Matt Riedemann 
Date:   Mon Mar 6 19:00:31 2017 -0500

libvirt: pass log_path to _create_pty_device for non-kvm/qemu

log_path is required in _create_pty_device if:

1. serial consoles are disabled
2. libvirt/qemu are new enough that they support virtlogd

This was working fine for kvm and qemu since _create_consoles_s390x
and _create_consoles_qemu_kvm pass in the log_path, but for the
non-kvm/qemu cases, like xen, the log_path wasn't provided.

This wasn't caught by the XenProject CI since it's using libvirt
1.3.1 which does not have virtlogd support so this path was
not exercised and apparently not unit tested either.

A release note is provided since this is a pretty severe bug if
you're running new enough libvirt/qemu and not using kvm/qemu as
the virt type: CONF.serial_console.enabled is False by default,
so you're going to have failed server creates immediately upon
upgrading to Ocata.

Change-Id: I7f60db1d243a75b90e3c0e53201cb6000ee95778
Closes-Bug: #1670522
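
A hedged sketch of the fix's shape (simplified; helper names
approximate nova's libvirt driver rather than quoting it): every
branch that reaches _create_pty_device must supply log_path, because
the virtlogd code path dereferences it when serial consoles are
disabled.

def create_consoles(self, virt_type, guest_cfg, instance):
    log_path = self._get_console_log_path(instance)  # illustrative helper
    if virt_type in ('qemu', 'kvm'):
        self._create_consoles_qemu_kvm(guest_cfg, instance, log_path)
    else:
        # Previously this branch omitted log_path, so xen (and other
        # non-kvm/qemu) guests failed with a TypeError once libvirt
        # >= 1.3.3 enabled the virtlogd path.
        self._create_pty_device(guest_cfg, 'serial', log_path=log_path)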


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670522

Title:
  TypeError booting instance with libvirt+xen in _create_pty_device with
  libvirt>=1.3.3

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Someone reported this in the nova IRC channel today using ocata with
  libvirt+xen:

  http://paste.openstack.org/raw/601670/

  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager 
[req-937dedd1-35a1-46e4-8516-fc53c99a8f48 - - - - -] [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] Instance failed to spawn
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] Traceback (most recent call last):
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2125, in 
_build_resources
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] yield resources
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1930, in 
_build_and_run_instance
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] block_device_info=block_device_info)
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2683, in 
spawn
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] block_device_info=block_device_info)
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4871, in 
_get_guest_xml
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] xml = conf.to_xml()
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py", line 77, in 
to_xml
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] root = self.format_dom()
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py", line 2161, in 
format_dom
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] self._format_devices(root)
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py", line 2119, in 
_format_devices
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c] devices.append(dev.format_dom())
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04cce319806c]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py", line 1636, in 
format_dom
  2017-03-06 17:09:47.609 8530 ERROR nova.compute.manager [instance: 
a2ac1972-54c4-4a7b-ab34-04c

[Yahoo-eng-team] [Bug 1670585] Re: lbaas-agent: 'ascii' codec can't encode characters

2017-03-07 Thread Anindita Das
** Changed in: neutron
   Status: New => Invalid

** Also affects: octavia
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1670585

Title:
  lbaas-agent: 'ascii' codec can't encode characters

Status in neutron:
  Invalid
Status in octavia:
  New

Bug description:
  version: liberty

  1) When Chinese characters are used as the load balancer name, the
  following error occurs:
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
[req-4a3f6b62-c449-4d88-82d1-96b8b96c7307 18295a4db5364daaa9f27e1169b96926 
65fe786567a341829aa05751b2b7360f - - -] Create listener 
75fef462-fe18-46a3-9722-6db2cf0be8ea failed on device driver haproxy_ns
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
300, in create_listener
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
driver.listener.create(listener)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 405, in create
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self.driver.loadbalancer.refresh(listener.loadbalancer)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 369, in refresh
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager if 
(not self.driver.deploy_instance(loadbalancer) and
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 254, in 
inner
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 174, in deploy_instance
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self.create(loadbalancer)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 202, in create
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self._spawn(loadbalancer)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 352, in _spawn
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 90, in save_config
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
utils.replace_file(conf_path, config_str)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 192, in 
replace_file
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
tmp_file.write(data)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib64/python2.7/socket.py", line 316, in write
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
data = str(data) # XXX Should really reject non-string non-buffers
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
UnicodeEncodeError: 'ascii' codec can't encode characters in position 20-43: 
ordinal not in range(128)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager
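
  A hedged sketch of the usual remedy for this class of failure
  (illustrative, written for Python 2 to match the traceback): encode
  unicode payloads to UTF-8 before handing them to a byte-oriented
  file, since replace_file in neutron/agent/linux/utils.py writes
  bytes.

  import os
  import tempfile

  def replace_file(file_name, data):
      base_dir = os.path.dirname(os.path.abspath(file_name))
      with tempfile.NamedTemporaryFile('wb', dir=base_dir,
                                       delete=False) as tmp_file:
          if isinstance(data, unicode):  # Python 2 only
              data = data.encode('utf-8')
          tmp_file.write(data)
      os.chmod(tmp_file.name, 0o644)
      os.rename(tmp_file.name, file_name)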

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1670585/+subscriptions



[Yahoo-eng-team] [Bug 1670738] [NEW] man page needed for "nova-manage db online_data_migrations"

2017-03-07 Thread Matt Riedemann
Public bug reported:

This came up in the openstack-operators mailing list:

http://lists.openstack.org/pipermail/openstack-operators/2017-March/012864.html

There is confusion around the online_data_migrations CLI for knowing
when it's complete. There are sort of two answers:

1. There is no summary table output.

2. The command exits with 0 meaning there were no migrations performed.
This is probably the best case to follow when checking/scripting for
this.

We need a man page explaining this in the nova docs:

https://github.com/openstack/nova/blob/master/doc/source/man/nova-manage.rst

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: doc low-hanging-fruit nova-manage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670738

Title:
  man page needed for "nova-manage db online_data_migrations"

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This came up in the openstack-operators mailing list:

  http://lists.openstack.org/pipermail/openstack-operators/2017-March/012864.html

  There is confusion around the online_data_migrations CLI for knowing
  when it's complete. There are sort of two answers:

  1. There is no summary table output.

  2. The command exits with 0 meaning there were no migrations
  performed. This is probably the best case to follow when
  checking/scripting for this.

  We need a man page explaining this in the nova docs:

  https://github.com/openstack/nova/blob/master/doc/source/man/nova-manage.rst
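
  A minimal sketch of scripting the completion check, assuming the
  exit-code semantics described above (0 means no migrations were
  performed, i.e. nothing is left to migrate):

  import subprocess
  import sys

  ret = subprocess.call(['nova-manage', 'db', 'online_data_migrations'])
  if ret == 0:
      print('online data migrations complete')
  else:
      print('migrations were performed (or failed); run again')
  sys.exit(ret)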

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1670738/+subscriptions



[Yahoo-eng-team] [Bug 1670737] [NEW] crashed, cannot create new instance

2017-03-07 Thread Tran Van Hoang
Public bug reported:

Environment: Newton Version on Ubuntu 16.04
root@node1:~# dpkg -l | grep nova
ii  nova-api              2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - API frontend
ii  nova-common           2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - common files
ii  nova-compute          2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - compute node base
ii  nova-compute-kvm      2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt  2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - compute node libvirt support
ii  nova-conductor        2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - conductor service
ii  nova-consoleauth      2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy       2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler        2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute - virtual machine scheduler
ii  python-nova           2:14.0.2-0ubuntu1~cloud0  all  OpenStack Compute Python libraries
ii  python-novaclient     2:6.0.0-0ubuntu1~cloud0   all  client library for OpenStack Compute API - Python 2.7

node1: controller, nova, neutron, glance (KVM)
node2: neutron (KVM)

Using Linux bridge to implement a provider network. Everything is OK,
but I get an error message when trying to create a new instance, and a
dialog tells me to upload the nova-api log to Launchpad.net.

root@node1:~# openstack compute service list
+----+------------------+-------+----------+---------+-------+------------------------+
| ID | Binary           | Host  | Zone     | Status  | State | Updated At             |
+----+------------------+-------+----------+---------+-------+------------------------+
|  7 | nova-consoleauth | node1 | internal | enabled | up    | 2017-03-06T18:34:17.00 |
|  9 | nova-scheduler   | node1 | internal | enabled | up    | 2017-03-06T18:34:17.00 |
| 10 | nova-conductor   | node1 | internal | enabled | up    | 2017-03-06T18:34:17.00 |
| 11 | nova-compute     | node1 | nova     | enabled | up    | 2017-03-06T18:34:15.00 |
| 12 | nova-compute     | node2 | nova     | enabled | up    | 2017-03-06T18:34:20.00 |
+----+------------------+-------+----------+---------+-------+------------------------+

root@node1:~# openstack network agent list
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| 0f395985-05a0-48e3-a353-278e98fbfd21 | DHCP agent         | node1 | nova              | True  | UP    | neutron-dhcp-agent        |
| 18654eb7-18f2-2412-4231-aa7827a38bce | Metadata agent     | node1 | None              | True  | UP    | neutron-metadata-agent    |
| 18654eb7-18f2-4129-beb5-aa708681fa5e | Metadata agent     | node2 | None              | True  | UP    | neutron-metadata-agent    |
| 2943f624-db65-4819-87af-761098983d6d | DHCP agent         | node2 | nova              | True  | UP    | neutron-dhcp-agent        |
| 6ed53842-d288-4745-b487-7915a9a1800c | Linux bridge agent | node1 | None              | True  | UP    | neutron-linuxbridge-agent |
| b9aa09ba-62ec-4e3c-a62d-6d8279014af4 | Linux bridge agent | node2 | None              | True  | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

After rebooting both nodes, I could create new instances normally. I
don't know why.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova-api.log"
   https://bugs.launchpad.net/bugs/1670737/+attachment/4832997/+files/nova-api.log

** Description changed:

- Environment:
+ Environment: Newton Version on Ubuntu 16.04
  root@node1:~# dpkg -l | grep nova
  ii  nova-api2:14.0.2-0ubuntu1~cloud0  
all  OpenStack Compute - API frontend
  ii  nova-common 2:14.0.2-0ubuntu1~cloud0  
all  OpenStack Compute - common files
  ii  nova-compute2:14.0.2-0ubuntu1~cloud0  
all  OpenStack Compute - compute node base
  ii  nova-compute-kvm2:14.0.2-0ubuntu1~cloud0  
all  OpenStack Compute - comput

[Yahoo-eng-team] [Bug 1669949] Re: report mode can exit 1 which will disable cloud-init

2017-03-07 Thread Scott Moser
** Changed in: cloud-init
   Status: Confirmed => Fix Committed

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1669949

Title:
  report mode can exit 1 which will disable cloud-init

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  Report mode could exit with 1 (disable) and the caller (cloud-init-
  generator) would then disable cloud-init.

  That's not very "report only".

  The change needed is to just make report always exit with 0 (enable).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1669949/+subscriptions



[Yahoo-eng-team] [Bug 1587556] Re: Drag and drop of button in the field shows button-action

2017-03-07 Thread Rob Cresswell
This is not a bug. There is no negative impact on a user whatsoever.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1587556

Title:
   Drag and drop of button in the field shows button-action

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  1. Log in to the Horizon Dashboard.
  2. Go to Project/Compute/Instances.
  3. Drag and drop the "Launch instance" button into the text field.

  Actual result: we can see the button-action in this field
  Expected result: empty text field

  This bug appears for all cases with field + button-action (not only
  Instances)

  Env: Devstack, master-branch 31 May 2016

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1587556/+subscriptions



[Yahoo-eng-team] [Bug 1616654] Re: Compile Modals only when needed

2017-03-07 Thread Rob Cresswell
I don't agree with this. We should stick with compiling everything up
front; I'm not sure what gains this gives us, and wouldn't it slow down
Horizon every time a new template was used?

** Changed in: horizon
 Assignee: Diana Whitten (hurgleburgler) => (unassigned)

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1616654

Title:
  Compile Modals only when needed

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Right now, we compile ALL templates upfront.  We may want to only use
  alerts when necessary, and so we should only compile each one as
  needed and cache its value for use later.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1616654/+subscriptions



[Yahoo-eng-team] [Bug 1670642] [NEW] get_model method missing for Ploop image

2017-03-07 Thread Evgeny Antyshev
Public bug reported:

This results in failures in inject_data from nova/virt/disk/api.py
even if no real data is injected:

2017-03-03 15:11:55.707 48683 ERROR nova.virt.libvirt.driver 
[req-f6f3df11-35b0-44c2-a1c4-52548504a1b5 fd33deabe809466a8dcd517bdf882e6b 
e4046b10f1074f06bd5d474ddfce09bc - - -] [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] Error injecting data into image 
082f0243-cd99-4312-a27a-86b8cd3688d2 ()
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager 
[req-f6f3df11-35b0-44c2-a1c4-52548504a1b5 fd33deabe809466a8dcd517bdf882e6b 
e4046b10f1074f06bd5d474ddfce09bc - - -] [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] Instance failed to spawn
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] Traceback (most recent call last):
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2078, in 
_build_resources
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] yield resources
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1920, in 
_build_and_run_instance
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] block_device_info=block_device_info)
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2624, in 
spawn
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] admin_pass=admin_password)
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3028, in 
_create_image
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] fallback_from_host)
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3137, in 
_create_and_inject_local_root
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] files)
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2977, in 
_inject_data
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] instance=instance)
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] self.force_reraise()
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] six.reraise(self.type_, self.value, 
self.tb)
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2968, in 
_inject_data
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] 
disk_api.inject_data(injection_image.get_model(self._conn),
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 395, 
in get_model
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] raise NotImplementedError()
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5] NotImplementedError
2017-03-03 15:11:55.708 48683 ERROR nova.compute.manager [instance: 
99aaeae0-e611-424e-bbd6-8d83ef37caf5]

libvirt.virt_type  = parallels
libvirt.images_type= ploop
libvirt.inject_partition   = -1
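
The traceback shows file injection failing because the ploop image
backend raises NotImplementedError from get_model(). A possible
workaround, assuming file injection is not actually needed (a config
drive or the metadata service can carry the same data), is to disable
injection outright in nova.conf:

[libvirt]
virt_type = parallels
images_type = ploop
# -2 disables file injection entirely; -1 (the current setting) asks
# nova to inspect the image, which needs get_model() backend support
inject_partition = -2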

** Affects: nova
 Importance: Undecided
 Assignee: Evgeny Antyshev (eantyshev)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Evgeny Antyshev (eantyshev)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.ne

[Yahoo-eng-team] [Bug 1640757] Re: Parameter validation for string type parameters

2017-03-07 Thread Reedip
Letting it expire as it doesn't add major value.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640757

Title:
  Parameter validation for string type parameters

Status in neutron:
  Invalid

Bug description:
  When I create a qos policy and set the name or description to escape
  characters (\n\t\r), the qos policy is still created successfully.
  The API performs no parameter checking on string-type parameters such
  as name or description.
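
  A minimal sketch of the kind of check being asked for (a hypothetical
  helper, not neutron's actual validator), following the neutron
  convention of returning None for valid input and a message otherwise:

  import re

  # ASCII control characters, including \n, \t and \r
  _CONTROL_CHARS = re.compile(r'[\x00-\x1f\x7f]')

  def validate_printable_string(value, max_len=255):
      """Reject names/descriptions containing control characters."""
      if not isinstance(value, str):
          return "'%s' is not a string" % (value,)
      if len(value) > max_len:
          return "'%s' exceeds maximum length %d" % (value, max_len)
      if _CONTROL_CHARS.search(value):
          return "'%s' contains control characters" % (value,)
      return None  # valid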

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1670628] [NEW] nova-compute will try to re-plug the vif even if it exists for vhostuser port.

2017-03-07 Thread wangyalei
Public bug reported:

Description
===
In mitaka version, deploy neutron with ovs-dpdk.
If we stop the ovs-agent and then restart nova-compute, the VMs on the
host lose network connectivity.

Steps to reproduce
==
Deploy Mitaka with neutron and ovs-dpdk enabled, and choose a compute
node hosting a VM with working network connectivity.
Run this on the host:
1. #systemctl stop neutron-openvswitch-agent.service
2. #systemctl restart openstack-nova-compute.service

then ping $VM_IN_THIS_HOST

Expected result
===
ping $VM_IN_THIS_HOST would succeed

Actual result
=
ping $VM_IN_THIS_HOST failed.

Environment
===
Centos7 
ovs2.5.1 
dpdk 2.2.0 
openstack-nova-compute-13.1.1-1 

Reason:
After some digging, I found that nova-compute tries to plug the vif
every time it boots.
Unlike for legacy ovs ports, for a vhostuser port nova-compute does not
check whether the port already exists, and it re-plugs the port with
vsctl args like "--if-exists del-port vhu".
(refer
https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/libvirt/vif.py#L679-L683)
After the ovs vhostuser port is recreated, it no longer has the right
vlan tag that the ovs agent had set.

In the test environment, restarting the ovs agent makes it set a proper
vlan id on the port, and network connectivity resumes.

Not sure whether this is a bug or a config issue; am I missing something?
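
A sketch of the guard being suggested (hypothetical helpers, not nova's
actual vif code): check whether the vhostuser port already exists in
OVS, as is done for legacy ovs ports, and only (re)create it when it is
really missing, so an existing port keeps the vlan tag the agent set.

import subprocess

def ovs_port_exists(bridge, port):
    """Return True if the port is already attached to the bridge."""
    out = subprocess.check_output(['ovs-vsctl', 'list-ports', bridge])
    return port in out.decode().split()

def plug_vhostuser_port(bridge, port, create_fn):
    if ovs_port_exists(bridge, port):
        # keep the existing port and the vlan tag set by the ovs agent
        return
    create_fn(bridge, port)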

** Affects: nova
 Importance: Undecided
 Assignee: wangyalei (yalei)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wangyalei (yalei)

** Summary changed:

- nova-compute will try to re-plug the vif even if it exists
+ nova-compute will try to re-plug the vif even if it exists for vhostuser port.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670628

Title:
  nova-compute will try to re-plug the vif even if it exists for
  vhostuser port.

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  In mitaka version, deploy neutron with ovs-dpdk.
  If we stop the ovs-agent and then restart nova-compute, the VMs on the
  host lose network connectivity.

  Steps to reproduce
  ==
  Deploy Mitaka with neutron and ovs-dpdk enabled, and choose a compute
  node hosting a VM with working network connectivity.
  Run this on the host:
  1. #systemctl stop neutron-openvswitch-agent.service
  2. #systemctl restart openstack-nova-compute.service

  then ping $VM_IN_THIS_HOST

  Expected result
  ===
  ping $VM_IN_THIS_HOST would succeed

  Actual result
  =
  ping $VM_IN_THIS_HOST failed.

  Environment
  ===
  Centos7 
  ovs2.5.1 
  dpdk 2.2.0 
  openstack-nova-compute-13.1.1-1 

  Reason:
  After some digging, I found that nova-compute tries to plug the vif
  every time it boots.
  Unlike for legacy ovs ports, for a vhostuser port nova-compute does
  not check whether the port already exists, and it re-plugs the port
  with vsctl args like "--if-exists del-port vhu".
  (refer
  https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/libvirt/vif.py#L679-L683)
  After the ovs vhostuser port is recreated, it no longer has the right
  vlan tag that the ovs agent had set.

  In the test environment, restarting the ovs agent makes it set a
  proper vlan id on the port, and network connectivity resumes.

  Not sure whether this is a bug or a config issue; am I missing something?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1670628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1670627] [NEW] quota is always in-use after delete the ERROR instances

2017-03-07 Thread huangtianhua
Public bug reported:

1. stop nova-compute 
2. boot an instance 
3. the instance is in ERROR
4. delete the instance
5. repeat 1-4 several times (actually I create the instances via heat,
which by default retries instance creation 5 times and deletes the
ERROR instance before each retry)
6. After nova-compute is back I can't boot an instance; the reason is
"Quota exceeded for instances: Requested 1, but already used 10 of 10
instances"
7. But in fact no instance is returned by cmd nova-list
8. I find that the quota is still in use; see table 'quota_usages':

mysql> select * from quota_usages;
+---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+----------------------------------+
| created_at          | updated_at          | deleted_at | id | project_id                       | resource  | in_use | reserved | until_refresh | deleted | user_id                          |
+---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+----------------------------------+
| 2017-03-07 06:26:08 | 2017-03-07 08:48:09 | NULL       |  1 | 2b623ba1dddc476cbb7728a944d539c5 | instances |     10 |        0 |          NULL |       0 | 8d57d7a267b54992b382a6607ecd700a |
| 2017-03-07 06:26:08 | 2017-03-07 08:48:09 | NULL       |  2 | 2b623ba1dddc476cbb7728a944d539c5 | ram       |   5120 |        0 |          NULL |       0 | 8d57d7a267b54992b382a6607ecd700a |
| 2017-03-07 06:26:08 | 2017-03-07 08:48:09 | NULL       |  3 | 2b623ba1dddc476cbb7728a944d539c5 | cores     |     10 |        0 |          NULL |       0 | 8d57d7a267b54992b382a6607ecd700a |
| 2017-03-07 09:17:37 | 2017-03-07 09:35:14 | NULL       |  4 | 12bdc74d666d4f7687c0172a003f190d | instances |     13 |        0 |          NULL |       0 | 98887477e65e43f383f8a9ec732a3eae |
| 2017-03-07 09:17:37 | 2017-03-07 09:35:14 | NULL       |  5 | 12bdc74d666d4f7687c0172a003f190d | ram       |   6656 |        0 |          NULL |       0 | 98887477e65e43f383f8a9ec732a3eae |
| 2017-03-07 09:17:37 | 2017-03-07 09:35:14 | NULL       |  6 | 12bdc74d666d4f7687c0172a003f190d | cores     |     13 |        0 |          NULL |       0 | 98887477e65e43f383f8a9ec732a3eae |
+---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+----------------------------------+
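
Until the root cause is fixed, stale counters like these usually have
to be reconciled by hand. A hedged example of such a manual reset (it
assumes the project really has no live instances and no pending
reservations; back up the database first):

mysql> UPDATE quota_usages
    ->    SET in_use = 0, updated_at = NOW()
    ->  WHERE project_id = '2b623ba1dddc476cbb7728a944d539c5'
    ->    AND resource IN ('instances', 'cores', 'ram');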

** Affects: nova
 Importance: Undecided
 Assignee: huangtianhua (huangtianhua)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => huangtianhua (huangtianhua)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670627

Title:
  quota is always in-use after delete the ERROR instances

Status in OpenStack Compute (nova):
  New

Bug description:
  1. stop nova-compute 
  2. boot an instance 
  3. the instance is in ERROR
  4. delete the instance
  5. repeat 1-4 several times (actually I create the instances via
heat, which by default retries instance creation 5 times and deletes
the ERROR instance before each retry)
  6. After nova-compute is back I can't boot an instance; the reason is
"Quota exceeded for instances: Requested 1, but already used 10 of 10
instances"
  7. But in fact no instance is returned by cmd nova-list
  8. I find that the quota is still in use; see table 'quota_usages':

  mysql> select * from quota_usages;
  +---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+----------------------------------+
  | created_at          | updated_at          | deleted_at | id | project_id                       | resource  | in_use | reserved | until_refresh | deleted | user_id                          |
  +---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+----------------------------------+
  | 2017-03-07 06:26:08 | 2017-03-07 08:48:09 | NULL       |  1 | 2b623ba1dddc476cbb7728a944d539c5 | instances |     10 |        0 |          NULL |       0 | 8d57d7a267b54992b382a6607ecd700a |
  | 2017-03-07 06:26:08 | 2017-03-07 08:48:09 | NULL       |  2 | 2b623ba1dddc476cbb7728a944d539c5 | ram       |   5120 |        0 |          NULL |       0 | 8d57d7a267b54992b382a6607ecd700a |
  | 2017-03-07 06:26:08 | 2017-03-07 08:48:09 | NULL       |  3 | 2b623ba1dddc476cbb7728a944d539c5 | cores     |     10 |        0 |          NULL |       0 | 8d57d7a267b54992b382a6607ecd700a |
  | 2017-03-07 09:17:37 | 2017-03-07 09:35:14 | NULL       |  4 | 12bdc74d666d4f7687c0172a003f190d | instances |     13 |        0 |          NULL |       0 | 98

[Yahoo-eng-team] [Bug 1670625] [NEW] populate fdb entries before spawning vm

2017-03-07 Thread Perry
Public bug reported:

Before spawning a VM, the neutron L2 agent has to wire up the device on
the compute node. This is controlled by provisioning_block in neutron.
However, the L2 agent on the compute node spends a long time syncing
large amounts of fdb info while wiring up the device, which makes the
VM slow to spawn and become active. The fdb synchronization could
instead happen while the VM is spawning, improving boot performance.
The sync also delays the next daemon_loop iteration from processing
other new taps.

Steps to reproduce the problem:

Linuxbridge L2 agent with dedicated compute nodes

1) nova boot  --image  ubuntu-guest-image-14.04  --flavor m1.medium
--nic net-name=internal --security-groups unlocked --min-count 50 --max-
count 50 free_deletable

2) Observe numerous log entries like the one below in neutron-linuxbridge-agent.log.

2017-03-07 05:01:43.220 25336 DEBUG neutron.agent.linux.utils [req-
534e9f59-0a66-4071-8c40-977f87a6be49 - - - - -] Running command:
['sudo', '/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf',
'bridge', 'fdb', 'append', '00:00:00:00:00:00', 'dev', 'vxlan-1', 'dst',
'10.160.152.141'] create_process /usr/lib/python2.7/site-
packages/neutron/agent/linux/utils.py:89

3) Eventually, processing a new tap takes a long time (6.2 seconds), as
seen in neutron-linuxbridge-agent.log:

2017-03-07 05:01:44.654 25336 DEBUG
neutron.plugins.ml2.drivers.agent._common_agent [req-835dd509-27b0-42e3
-903d-32577187d288 - - - - -] Loop iteration exceeded interval (2 vs.
6.2001709938)! daemon_loop /usr/lib/python2.7/site-
packages/neutron/plugins/ml2/drivers/agent/_common_agent.py:466
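
Each fdb entry above is programmed through its own rootwrap'd
subprocess, which is where most of the time goes. One possible
mitigation (a sketch of the idea, not what the agent does today) is to
batch the entries through a single bridge invocation:

# hypothetical batch file with one fdb command per line
$ cat > /tmp/fdb-batch <<'EOF'
fdb append 00:00:00:00:00:00 dev vxlan-1 dst 10.160.152.141
fdb append 00:00:00:00:00:00 dev vxlan-1 dst 10.160.152.142
EOF
$ sudo bridge -batch /tmp/fdb-batch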

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  before spawning vm, neutron l2 agent has to wire up device on compute.
- This is controlled by provision_block in neutron. However L2 agent on
+ This is controlled by provisioning_block in neutron. However L2 agent on
  compute takes much time to sync lots of fdb info during wire up device
  and which causes VM takes long time to spawn and active. The fdb
  synchronization could be done when spawning vm to improve the
  performance of booting vm. This also delays next loop of daemon_loop to
  get another new taps processed.
- 
  
  Steps to re-produce the problem:
  
  L2 of linuxbridge and dedicated compute nodes
  
  1) nova boot  --image  ubuntu-guest-image-14.04-20161130-x86_64
  --flavor m1.medium --nic net-name=internal --security-groups unlocked
  --min-count 50 --max-count 50 free_deletable
  
  2)observe numerous log as below in neutron-linuxbridge-agent.log.
  
  2017-03-07 05:01:43.220 25336 DEBUG neutron.agent.linux.utils [req-
  534e9f59-0a66-4071-8c40-977f87a6be49 - - - - -] Running command:
  ['sudo', '/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf',
  'bridge', 'fdb', 'append', '00:00:00:00:00:00', 'dev', 'vxlan-1', 'dst',
  '10.160.152.141'] create_process /usr/lib/python2.7/site-
  packages/neutron/agent/linux/utils.py:89
  
  3)eventually, it takes lots of time to process new tap (6.2 seconds) in
  neutron-linuxbridge-agent.log
  
  2017-03-07 05:01:44.654 25336 DEBUG
  neutron.plugins.ml2.drivers.agent._common_agent [req-835dd509-27b0-42e3
  -903d-32577187d288 - - - - -] Loop iteration exceeded interval (2 vs.
  6.2001709938)! daemon_loop /usr/lib/python2.7/site-
  packages/neutron/plugins/ml2/drivers/agent/_common_agent.py:466

** Description changed:

  before spawning vm, neutron l2 agent has to wire up device on compute.
  This is controlled by provisioning_block in neutron. However L2 agent on
  compute takes much time to sync lots of fdb info during wire up device
  and which causes VM takes long time to spawn and active. The fdb
  synchronization could be done when spawning vm to improve the
  performance of booting vm. This also delays next loop of daemon_loop to
  get another new taps processed.
  
  Steps to re-produce the problem:
  
  L2 of linuxbridge and dedicated compute nodes
  
- 1) nova boot  --image  ubuntu-guest-image-14.04-20161130-x86_64
- --flavor m1.medium --nic net-name=internal --security-groups unlocked
- --min-count 50 --max-count 50 free_deletable
+ 1) nova boot  --image  ubuntu-guest-image-14.04  --flavor m1.medium
+ --nic net-name=internal --security-groups unlocked --min-count 50 --max-
+ count 50 free_deletable
  
  2)observe numerous log as below in neutron-linuxbridge-agent.log.
  
  2017-03-07 05:01:43.220 25336 DEBUG neutron.agent.linux.utils [req-
  534e9f59-0a66-4071-8c40-977f87a6be49 - - - - -] Running command:
  ['sudo', '/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf',
  'bridge', 'fdb', 'append', '00:00:00:00:00:00', 'dev', 'vxlan-1', 'dst',
  '10.160.152.141'] create_process /usr/lib/python2.7/site-
  packages/neutron/agent/linux/utils.py:89
  
  3)eventually, it takes lots of time to process new tap (6.2 seconds) in
  neutron-linuxbridge-agent.log
  
  2017-03-07 05:01:44.654 25336 DEBUG
  neutron.plugins.ml2.drivers.agent._common_agent [req-835dd509-27b0-42e3
  -903d-32577187d288 - - - - -] Loop iteration exceeded in

[Yahoo-eng-team] [Bug 1524916] Re: neutron-ns-metadata-proxy uses ~25MB/router in production

2017-03-07 Thread Jakub Libosvar
I added the TripleO project here so we don't forget to update the l3
and dhcp rootwrap filters after the upgrade to Pike. I hope it's the
right project; if there is a dedicated project that handles upgrades,
please tell us :)

** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524916

Title:
  neutron-ns-metadata-proxy uses ~25MB/router in production

Status in neutron:
  In Progress
Status in tripleo:
  New

Bug description:
  [root@mac6cae8b61e442 memexplore]# ./memexplore.py all metadata-proxy | cut 
-c 1-67
  25778 kB  (pid 420) /usr/bin/python /bin/neutron-ns-metadata-proxy -
  25774 kB  (pid 1468) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 1472) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25770 kB  (pid 1474) /usr/bin/python /bin/neutron-ns-metadata-proxy
  26528 kB  (pid 1489) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 1520) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 1738) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 1814) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 2024) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 3961) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 4076) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25770 kB  (pid 4099) /usr/bin/python /bin/neutron-ns-metadata-proxy
  [...]
  25778 kB  (pid 31386) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 31403) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 31416) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 31453) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25770 kB  (pid 31483) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25770 kB  (pid 31647) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 31743) /usr/bin/python /bin/neutron-ns-metadata-proxy

  2,581,230 kB Total PSS

  if we look explicitly at one of those processes we see:

  # ./memexplore.py pss 24039
  0 kB  7f97db981000-7f97dbb81000 ---p 0005f000 fd:00 4298776438
 /usr/lib64/libpcre.so.1.2.0
  0 kB  7f97dbb83000-7f97dbba4000 r-xp  fd:00 4298776486
 /usr/lib64/libselinux.so.1
  0 kB  7fff16ffe000-7fff1700 r-xp  00:00 0 
 [vdso]
  0 kB  7f97dacb5000-7f97dacd1000 r-xp  fd:00 4298779123
 /usr/lib64/python2.7/lib-dynload/_io.so
  0 kB  7f97d6a06000-7f97d6c05000 ---p 000b1000 fd:00 4298777149
 /usr/lib64/libsqlite3.so.0.8.6
  [...]
  0 kB  7f97d813a000-7f97d8339000 ---p b000 fd:00 4298779157
 /usr/lib64/python2.7/lib-dynload/pyexpat.so
  0 kB  7f97dbba4000-7f97dbda4000 ---p 00021000 fd:00 4298776486
 /usr/lib64/libselinux.so.1
  0 kB  7f97db4f7000-7f97db4fb000 r-xp  fd:00 4298779139
 /usr/lib64/python2.7/lib-dynload/cStringIO.so
  0 kB  7f97dc81e000-7f97dc81f000 rw-p  00:00 0
  0 kB  7f97d8545000-7f97d8557000 r-xp  fd:00 4298779138
 /usr/lib64/python2.7/lib-dynload/cPickle.so
  0 kB  7f97d9fd3000-7f97d9fd7000 r-xp  fd:00 4298779165
 /usr/lib64/python2.7/lib-dynload/timemodule.so
  0 kB  7f97d99c4000-7f97d9bc3000 ---p 2000 fd:00 4298779147
 /usr/lib64/python2.7/lib-dynload/grpmodule.so
  0 kB  7f97daedb000-7f97daede000 r-xp  fd:00 4298779121
 /usr/lib64/python2.7/lib-dynload/_heapq.so
  0 kB  7f97ddfd4000-7f97ddfd7000 r-xp  fd:00 4298779119
 /usr/lib64/python2.7/lib-dynload/_functoolsmodule.so
  0 kB  7f97d8b67000-7f97d8b78000 r-xp  fd:00 4298779141
 /usr/lib64/python2.7/lib-dynload/datetime.so
  0 kB  7f97d7631000-7f97d7635000 r-xp  fd:00 4298776496
 /usr/lib64/libuuid.so.1.3.0
  0 kB  7f97dd59e000-7f97dd5a6000 r-xp  fd:00 4298779132
 /usr/lib64/python2.7/lib-dynload/_ssl.so
  0 kB  7f97dbfc-7f97dbfc2000 rw-p  00:00 0
  0 kB  7f97dd332000-7f97dd394000 r-xp  fd:00 4298776137
 /usr/lib64/libssl.so.1.0.1e
  0 kB  7f97d6e22000-7f97d7021000 ---p 4000 fd:00 6442649369
 /usr/lib64/python2.7/site-packages/sqlalchemy/cresultproxy.so
  0 kB  7f97d95bb000-7f97d97ba000 ---p b000 fd:00 4298779156
 /usr/lib64/python2.7/lib-dynload/parsermodule.so
  0 kB  7f97da3dd000-7f97da3e r-xp  fd:00 4298779129
 /usr/lib64/python2.7/lib-dynload/_randommodule.so
  0 kB  7f97dddcf000-7f973000 r-xp  fd:00 4298779125
 /usr/lib64/python2.7/lib-dynload/_localemodule.so
  0 kB  7f97da7e5000-7f97da7ea000 r-xp  fd:00 4298779136
 /usr/lib64/python2.7/l

[Yahoo-eng-team] [Bug 1619156] Re: DerivedStats - add support for Time Range based SUM/AVG

2017-03-07 Thread Anish Mehta
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1619156

Title:
  DerivedStats - add support for Time Range based  SUM/AVG

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Juniper Openstack:
  Fix Committed
Status in Juniper Openstack r3.1 series:
  Fix Committed
Status in Juniper Openstack trunk series:
  Fix Committed

Bug description:
  DerivedStats SUM and AVG should have a time range specified in seconds.
  Right now, we allow range in terms of number of samples instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1619156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667086] Re: XSS in federation mappings UI

2017-03-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/442277
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a835dbfbaa2c70329c08d4b8429d49315dc6d651
Submitter: Jenkins
Branch:master

commit a835dbfbaa2c70329c08d4b8429d49315dc6d651
Author: Richard Jones 
Date:   Tue Mar 7 16:55:39 2017 +1100

Remove dangerous safestring declaration

This declaration allows XSS content through the JSON and
is unnecessary for correct rendering of the content anyway.

Change-Id: I82355b37108609ae573237424e528aab86a24efc
Closes-Bug: 1667086


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1667086

Title:
  XSS in federation mappings UI

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  Found in Mitaka

  Steps:
  - Setup federation in keystone and horizon
  - Launch and login to horizon as an admin
  - Click on the Federation->Mappings tab
  - Create or update a mapping with the following content that contains 
javascript

  [
  {
  "local": [
  {
  "domain": {
  "name": "Default"
  },
  "group": {
  "domain": {
  "name": "Default"
  },
  "name": "Federated Users"
  },
  "user": {
  "name": "{alert('test');}",
  "email": "{1}"
  },
  "groups": "{2}"
  }
  ],
  "remote": [
  {
  "type": "REMOTE_USER"
  },
  {
  "type": "MELLON_userEmail"
  },
  {
  "type": "MELLON_groups"
  }
  ]
  }
  ]

  Now whenever this Federation->Mapping page is shown, the javascript
  will execute.

  It appears other pages in horizon protect against such attacks (such
  as Users, Groups, etc.), so I'm guessing the rendering of this page
  just needs to be escaped so that tags are not interpreted.
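
  The fix referenced above removes a declaration of exactly this kind.
  A minimal illustration of the pattern (hypothetical variable names,
  not horizon's actual code):

  import json

  from django.utils.html import escape
  from django.utils.safestring import mark_safe

  mapping_rules = [{"local": [{"user": {"name": "{0}"}}]}]  # example

  # Dangerous: mark_safe() tells Django templates not to escape the
  # value, so a script payload inside the JSON is rendered verbatim.
  rules = mark_safe(json.dumps(mapping_rules, indent=2))

  # Safe: rely on template autoescaping, or escape explicitly.
  rules = escape(json.dumps(mapping_rules, indent=2))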

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1667086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp