[Yahoo-eng-team] [Bug 1681658] [NEW] disk_allocation_ratio does not work with placement API

2017-04-10 Thread nirendra
Public bug reported:

disk_allocation_ratio is not working in Ocata because the requested
amount is checked against max_unit.
Because of this, if I have a compute node with a 64 GB local disk and I'm
using iSCSI storage, then even after setting disk_allocation_ratio to .0
I cannot create a VM with an 80 GB root disk.

The following code segment in objects/resource_provider.py causes this to fail:

    _INV_TBL.c.resource_class_id == r_idx,
    (func.coalesce(usage.c.used, 0) + amount <= (
        _INV_TBL.c.total - _INV_TBL.c.reserved
    ) * _INV_TBL.c.allocation_ratio),
    _INV_TBL.c.min_unit <= amount,
    _INV_TBL.c.max_unit >= amount,
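For illustration, a minimal Python sketch (hypothetical numbers, not the actual nova code) of the two checks quoted above: the capacity test scales with allocation_ratio, but the max_unit test does not, so an oversized request is rejected no matter how large the ratio is.

```python
def can_allocate(amount, total, reserved, used,
                 allocation_ratio, min_unit, max_unit):
    """Mirror the two placement checks quoted above (simplified)."""
    # Capacity check: this one IS scaled by allocation_ratio.
    capacity_ok = used + amount <= (total - reserved) * allocation_ratio
    # Unit check: this one is NOT scaled, which is the reported problem.
    unit_ok = min_unit <= amount <= max_unit
    return capacity_ok and unit_ok

# 64 GB local disk; max_unit defaults to the total, so an 80 GB root
# disk fails the unit check even though the ratio-scaled capacity
# (64 * 2.0 = 128 GB) would allow it.
print(can_allocate(amount=80, total=64, reserved=0, used=0,
                   allocation_ratio=2.0, min_unit=1, max_unit=64))   # False
```

Raising max_unit (or not capping it at total) would let the ratio take effect for this request.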

Environment: 
# openstack --version
openstack 3.8.1

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1681658

Title:
  disk_allocation_ratio does not work with placement API

Status in OpenStack Compute (nova):
  New

Bug description:
  disk_allocation_ratio is not working in Ocata because the requested
  amount is checked against max_unit.
  Because of this, if I have a compute node with a 64 GB local disk and I'm
  using iSCSI storage, then even after setting disk_allocation_ratio to .0
  I cannot create a VM with an 80 GB root disk.

  The following code segment in objects/resource_provider.py causes this to fail:

      _INV_TBL.c.resource_class_id == r_idx,
      (func.coalesce(usage.c.used, 0) + amount <= (
          _INV_TBL.c.total - _INV_TBL.c.reserved
      ) * _INV_TBL.c.allocation_ratio),
      _INV_TBL.c.min_unit <= amount,
      _INV_TBL.c.max_unit >= amount,

  Environment: 
  # openstack --version
  openstack 3.8.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1681658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644231] Re: fip router config is not created if the vm ports attached to FIPs have no device_owner

2017-04-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644231

Title:
  fip router config is not created if the vm ports attached to FIPs have
  no device_owner

Status in neutron:
  Expired

Bug description:
  With dvr_snat or dvr mode, if you create a port as described and then
  attach it to a netns on any of the computes or the dvr_snat node, the
  _floatingips key is not set by neutron-server on a sync_routers call from
  the l3-agent.

  This leads to the FIP namespace not being updated for the specific
  floating IP, or not even being created.

  We should either document that a valid device_owner is necessary[1] for a
  floating IP in DVR, or accept an empty device_owner.

  I believe we should accept an empty device_owner so the behavior does not
  differ from the non-DVR implementation.
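  For illustration, a rough Python sketch of the kind of device_owner gate
  that reference [1] points at (the prefix list and function name here are
  simplified assumptions, not the actual neutron code): a port with an
  empty device_owner never qualifies as DVR-serviced, so its floating IP is
  skipped during sync_routers.

```python
# Hypothetical stand-in for the device_owner check referenced in [1].
DVR_SERVICED_PREFIXES = ("compute:", "neutron:LOADBALANCER")

def is_dvr_serviced(device_owner):
    """Return True if a port with this device_owner gets FIP plumbing."""
    return device_owner.startswith(DVR_SERVICED_PREFIXES)

print(is_dvr_serviced(""))              # manually attached netns port
print(is_dvr_serviced("compute:nova"))  # normal VM port
```

  Accepting an empty device_owner would amount to adding `not device_owner`
  as a passing case in such a check.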


  
  Script to reproduce:

  neutron net-create dmz

  ID_DMZ=$(neutron subnet-create dmz --name dmz_subnet 172.16.255.128/26 | awk '/ id / { print $4 }')
  neutron port-create --name dmz-vm1 dmz --fixed-ip subnet_id=$ID_DMZ,ip_address=172.16.255.130 --binding:host_id=$(hostname)

  
  ID_DMZ_NET=$(neutron net-show dmz | awk ' / id / { print $4 }')

  DMZ_VM1_MAC=$(neutron port-show dmz-vm1 | awk ' / mac_address / { print $4 } ')
  DMZ_VM1_ID=$(neutron port-show dmz-vm1 | awk ' / id / { print $4 } ')

  sudo ip netns add vm1
  sudo ovs-vsctl -- --may-exist add-port br-int vm1 \
 -- set Interface vm1 type=internal  \
 external_ids:attached-mac=$DMZ_VM1_MAC \
 external_ids:iface-id=$DMZ_VM1_ID \
 external_ids:vm-id=vm-$DMZ_VM1_ID \
 external_ids:iface-status=active external_ids:owner=admin 

  sudo ip link set vm1 address $DMZ_VM1_MAC

  sudo ip link set vm1 netns vm1
  sudo ip netns exec vm1 ip link set dev vm1 up
  sudo ip netns exec vm1 dhclient -I vm1 --no-pid vm1
  sudo ip netns exec vm1 ip addr show
  sudo ip netns exec vm1 ip route show


  
  neutron router-create router_dmz
  neutron router-gateway-set router_dmz public
  neutron router-interface-add router_dmz dmz_subnet


  FIP_ID=$(neutron floatingip-create public | awk '/ id / { print $4 }')
  FIP_IP=$(neutron floatingip-show $FIP_ID | awk '/ floating_ip_address / { print $4 }')
  neutron floatingip-associate $FIP_ID $DMZ_VM1_ID 


  [1]
  
https://github.com/openstack/neutron/blob/f2235b7994b22d3e4be72185b86ba5723352f4b0/neutron/common/utils.py#L227

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1636380] Re: [stable/newton] LBaaS statistics cli doesn't show updated values.

2017-04-10 Thread He Qing
This should be fixed by https://review.openstack.org/#/c/449573/

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1636380

Title:
  [stable/newton] LBaaS statistics cli doesn't show updated values.

Status in neutron:
  Fix Released

Bug description:
  I have enabled Octavia in my stable/newton setup and verifying LBaaS
  services.

  The CLI command "neutron lbaas-loadbalancer-stats" does not show updated
  values. All the statistics fields' values are zero.

  vmware@cntr1:/opt/stack/logs$ neutron lbaas-loadbalancer-stats 663350ad-d515-423b-add1-11b38b14356b
  +--------------------+-------+
  | Field              | Value |
  +--------------------+-------+
  | active_connections | 0     |
  | bytes_in           | 0     |
  | bytes_out          | 0     |
  | total_connections  | 0     |
  +--------------------+-------+

  However, stats collected using Octavia API is having updated values.

  vmware@cntr1:/opt/stack/logs$ curl -H "X-Auth-Token:008e1258cc294f9eb3a99df7b2da3e88" http://localhost:9876/v1/loadbalancers/663350ad-d515-423b-add1-11b38b14356b/listeners/a7743e61-a507-4300-81ad-3da3c9046511/stats
  {"bytes_in": 89520, "total_connections": 1119, "active_connections": 0, "bytes_out": 402840}

  Loadbalancer Info:

  vmware@cntr1:/opt/stack/logs$ neutron lbaas-loadbalancer-show 663350ad-d515-423b-add1-11b38b14356b
  +---------------------+------------------------------------------------+
  | Field               | Value                                          |
  +---------------------+------------------------------------------------+
  | admin_state_up      | True                                           |
  | description         |                                                |
  | id                  | 663350ad-d515-423b-add1-11b38b14356b           |
  | listeners           | {"id": "a7743e61-a507-4300-81ad-3da3c9046511"} |
  | name                | test-lb                                        |
  | operating_status    | ONLINE                                         |
  | pools               | {"id": "3a5e4090-fb2c-49e6-b209-ac5c6fb6fd21"} |
  | provider            | octavia                                        |
  | provisioning_status | PENDING_UPDATE                                 |
  | tenant_id           | c9b2dc6e087e4aa9913701f48d520914               |
  | vip_address         | 50.0.0.11                                      |
  | vip_port_id         | 540c4137-ab17-42d1-9e21-a82793607772           |
  | vip_subnet_id       | ae311056-b856-464d-a706-bd26f33917a8           |
  +---------------------+------------------------------------------------+

  vmware@cntr1:/opt/stack/logs$ neutron lbaas-listener-list
  +--------------------------------------+--------------------------------------+--------------+----------+---------------+----------------+
  | id                                   | default_pool_id                      | name         | protocol | protocol_port | admin_state_up |
  +--------------------------------------+--------------------------------------+--------------+----------+---------------+----------------+
  | a7743e61-a507-4300-81ad-3da3c9046511 | 3a5e4090-fb2c-49e6-b209-ac5c6fb6fd21 | test-lb-http | TCP      |          8000 | True           |
  +--------------------------------------+--------------------------------------+--------------+----------+---------------+----------------+

  Please fix it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1636380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 arm64 server cartridge fails

2017-04-10 Thread Joshua Powers
** Changed in: cloud-init
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1499869

Title:
  maas wily deployment to HP Proliant m400 arm64 server cartridge fails

Status in cloud-init:
  Fix Released
Status in curtin:
  Invalid
Status in cloud-init package in Ubuntu:
  Fix Released
Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Vivid:
  Fix Released
Status in cloud-init source package in Wily:
  Fix Released
Status in linux source package in Wily:
  Fix Released

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt. I
  can install wily via netboot without issue and some time back, wily
  was deployable to this node from MAAS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1499869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681581] [NEW] cloud-init bug using GCE

2017-04-10 Thread Sherwin James
Public bug reported:

A new feature in cloud-init identified possible datasources for
this system as:
  ['None']
However, the datasource used was: GCE

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: dsid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1681581

Title:
  cloud-init bug using GCE

Status in cloud-init:
  New

Bug description:
  A new feature in cloud-init identified possible datasources for
  this system as:
    ['None']
  However, the datasource used was: GCE

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1681581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681551] [NEW] External LDAP integration overrides Keystone Version

2017-04-10 Thread Fatih Nar
Public bug reported:

After enabling LDAP, a new local (non-LDAP/AD) domain called admin_domain is
created, with a new admin user in it. The old admin user in the “default”
local openstack domain no longer exists.
As a result, attempting v2 authentication (instead of v3) with the keystone
v2 rc file downloadable through horizon no longer works, because the rc file
specifies the old user in the “default” local openstack domain.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1681551

Title:
  External LDAP integration overrides Keystone Version

Status in OpenStack Identity (keystone):
  New

Bug description:
  After enabling LDAP, a new local (non-LDAP/AD) domain called admin_domain
  is created, with a new admin user in it. The old admin user in the
  “default” local openstack domain no longer exists.
  As a result, attempting v2 authentication (instead of v3) with the
  keystone v2 rc file downloadable through horizon no longer works, because
  the rc file specifies the old user in the “default” local openstack domain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1681551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528349] Re: Nova and Glance contain a near-identical signature_utils module

2017-04-10 Thread Matt Riedemann
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528349

Title:
  Nova and Glance contain a near-identical signature_utils module

Status in Glance:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  It appears that https://review.openstack.org/256069 took the
  signature_utils modules from Glance and modified it in fairly
  superficial ways based on review feedback:

$ diff -u nova/nova/signature_utils.py glance/glance/common/signature_utils.py | diffstat
 signature_utils.py |  182 -
 1 file changed, 83 insertions(+), 99 deletions(-)

  The Oslo project was created to avoid this sort of short-sighted cut-
  and-pasting. This code should really be in a python library that both
  Glance and Nova could use directly.

  Perhaps the code could be moved to a new library in the Glance
  project, or a new library in the Oslo project, or into the
  cryptography library itself?

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1528349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681531] [NEW] networking.service fails on ifup if networking configured via cloud-init

2017-04-10 Thread Ben Howard
Public bug reported:

When cloud-init configures networking, it uses `ifup`. However, after
cloud-init is done, networking.service runs and attempts an `ifup -a
--read-environment`, which fails because the devices are already configured
("RTNETLINK answers: File exists", return code 1).

The end result is that any service that depends on networking fails
to start.
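One way to make the bring-up idempotent is to treat the "already configured" error as success. A minimal Python sketch of that pattern (hypothetical helper names, not cloud-init's or ifupdown's actual code):

```python
import subprocess

def bringup_ok(returncode, stderr):
    """RTNETLINK 'File exists' means the address/route is already applied,
    which is the state we wanted anyway, so count it as success."""
    return returncode == 0 or "RTNETLINK answers: File exists" in stderr

def ensure_up(dev):
    """Bring a device up, tolerating an already-configured device."""
    proc = subprocess.run(["ip", "link", "set", dev, "up"],
                          capture_output=True, text=True)
    if not bringup_ok(proc.returncode, proc.stderr):
        raise RuntimeError(f"ifup {dev} failed: {proc.stderr.strip()}")
```

The same idea applies whether the retry comes from cloud-init or from networking.service itself.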

--
From /var/log/cloud-init.log:

2017-04-10 17:36:11,608 - util.py[DEBUG]: Running command ['ip', 'link', 'set', 'ens3', 'down'] with allowed return codes [0] (shell=False, capture=True)
2017-04-10 17:36:11,615 - util.py[DEBUG]: Running command ['ip', 'link', 'set', 'ens3', 'name', 'eth0'] with allowed return codes [0] (shell=False, capture=True)
2017-04-10 17:36:11,635 - util.py[DEBUG]: Running command ['ip', 'link', 'set', 'ens4', 'name', 'eth1'] with allowed return codes [0] (shell=False, capture=True)
2017-04-10 17:36:11,651 - util.py[DEBUG]: Running command ['ip', 'link', 'set', 'eth0', 'up'] with allowed return codes [0] (shell=False, capture=True)
2017-04-10 17:36:11,654 - stages.py[INFO]: Applying network configuration from ds bringup=False: {'version': 1, 'config': [{'name': 'eth0', 'subnets': [{'address': '138.197.88.85', 'netmask': '255.255.240.0', 'gateway': '138.197.80.1', 'type': 'static', 'control': 'auto'}, {'address': '2604:A880:0800:0010:::2ECE:D001/64', 'gateway': '2604:A880:0800:0010::::0001', 'type': 'static', 'control': 'auto'}, {'address': '10.17.0.10', 'netmask': '255.255.0.0', 'type': 'static', 'control': 'auto'}], 'mac_address': 'ee:90:f2:c6:dc:db', 'type': 'physical'}, {'name': 'eth1', 'subnets': [{'address': '10.132.92.131', 'netmask': '255.255.0.0', 'gateway': '10.132.0.1', 'type': 'static', 'control': 'auto'}], 'mac_address': '1a:b6:7c:24:5e:cd', 'type': 'physical'}, {'address': ['2001:4860:4860::8844', '2001:4860:4860::', '8.8.8.8'], 'type': 'nameserver'}]}
2017-04-10 17:36:11,668 - util.py[DEBUG]: Writing to /etc/network/interfaces.d/50-cloud-init.cfg - wb: [420] 868 bytes
2017-04-10 17:36:11,669 - main.py[DEBUG]: [local] Exiting. datasource DataSourceDigitalOcean not in local mode.
2017-04-10 17:36:11,674 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)

--
From 'dmesg':
Apr 10 17:36:11 ubuntu systemd[1]: Started Initial cloud-init job 
(pre-networking).
Apr 10 17:36:12 ubuntu systemd[1]: Started LSB: AppArmor initialization.
Apr 10 17:36:12 ubuntu systemd[1]: Reached target Network (Pre).
Apr 10 17:36:12 ubuntu systemd[1]: Starting Raise network interfaces...
Apr 10 17:36:13 ubuntu ifup[1099]: Waiting for DAD... Done
Apr 10 17:36:13 ubuntu ifup[1099]: RTNETLINK answers: File exists
Apr 10 17:36:13 ubuntu ifup[1099]: Failed to bring up eth1.

--
$ sudo journalctl -xe -u networking
Apr 10 17:36:12 ubuntu systemd[1]: Starting Raise network interfaces...
-- Subject: Unit networking.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit networking.service has begun starting up.
Apr 10 17:36:13 ubuntu ifup[1099]: Waiting for DAD... Done
Apr 10 17:36:13 ubuntu ifup[1099]: RTNETLINK answers: File exists
Apr 10 17:36:13 ubuntu ifup[1099]: Failed to bring up eth1.
Apr 10 17:36:13 ubuntu systemd[1]: networking.service: Main process exited, 
code=exited, status=1/FAILURE
Apr 10 17:36:13 ubuntu systemd[1]: Failed to start Raise network interfaces.
-- Subject: Unit networking.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit networking.service has failed.
--

** Affects: cloud-init (Ubuntu)
 Importance: Undecided
 Status: New

** Project changed: cloud-init => cloud-init (Ubuntu)

** Summary changed:

- networking.service fails on ifup due to new rendering of 50-cloud-init.cfg
+ networking.service fails on ifup if networking configured via cloud-init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1681531

Title:
  networking.service fails on ifup if networking configured via cloud-
  init

Status in cloud-init package in Ubuntu:
  New

Bug description:
  When cloud-init configures networking, it uses `ifup`. However, after
  cloud-init is done, networking.service runs and attempts an `ifup -a
  --read-environment`, which fails because the devices are already
  configured ("RTNETLINK answers: File exists", return code 1).

  The end result is that any service that depends on networking fails
  to start.

  
--
  From /v

[Yahoo-eng-team] [Bug 1657260] Re: Established connections don't stop when rule is removed

2017-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/441353
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c76164c058a0cfeee3eb46b523a9ad012f93dd51
Submitter: Jenkins
Branch:master

commit c76164c058a0cfeee3eb46b523a9ad012f93dd51
Author: Kevin Benton 
Date:   Fri Mar 3 11:18:28 2017 -0800

Move conntrack zones to IPTablesFirewall

The regular IPTablesFirewall needs zones to support safely
clearing conntrack entries.

In order to support the single bridge use case, the conntrack
manager had to be refactored slightly to allow zones to be
either unique to ports or unique to networks.

Since all ports in a network share a bridge in the IPTablesDriver
use case, a zone per port cannot be used since there is no way
to distinguish which zone traffic should be checked against when
traffic enters the bridge from outside the system.

A zone per network is adequate for the single bridge per network
solution since it implicitly does not suffer from the double-bridge
cross in a single network that led to per port usage in OVS.[1]

This had to adjust the functional firewall tests to use the correct
bridge name now that it's relevant in the non hybrid IPTables case.

1. Ibe9e49653b2a280ea72cb95c2da64cd94c7739da

Closes-Bug: #1668958
Closes-Bug: #1657260
Change-Id: Ie88237d3fe4807b712a7ec61eb932748c38952cc


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657260

Title:
  Established connections don't stop when rule is removed

Status in neutron:
  Fix Released

Bug description:
  If the iptables driver is used for security groups (e.g. in the
  Linuxbridge L2 agent), there is an issue with updating rules. When you
  have a rule which allows some kind of traffic (for example ssh from some
  source IP address) and an established, active connection matching this
  rule, the connection stays active even after the rule is removed or
  changed.
  This is because the first rule in each SG's iptables chain accepts
  packets with "state RELATED,ESTABLISHED".
  I'm not sure if this is in fact a bug, or a design decision made for
  better iptables performance.
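  The effect can be seen with a tiny first-match chain simulation in
  Python (a toy model of iptables evaluation, not the real firewall code):
  because the ESTABLISHED rule sits first, an existing flow is accepted
  whether or not the ssh rule is still present; only new connections are
  affected by the rule removal.

```python
def evaluate(chain, packet):
    """First matching rule wins, like an iptables chain; default DROP."""
    for match, verdict in chain:
        if match(packet):
            return verdict
    return "DROP"

established = lambda p: p.get("state") == "ESTABLISHED"
ssh = lambda p: p.get("dport") == 22

with_rule = [(established, "ACCEPT"), (ssh, "ACCEPT")]
without_rule = [(established, "ACCEPT")]  # ssh rule removed

flow = {"state": "ESTABLISHED", "dport": 22}
print(evaluate(with_rule, flow))     # ACCEPT
print(evaluate(without_rule, flow))  # ACCEPT - existing flow survives
print(evaluate(without_rule, {"state": "NEW", "dport": 22}))  # DROP
```

  Cutting the existing flow requires deleting its conntrack entry as well,
  which is what the zone-based conntrack clearing in the fix enables.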

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681073] Re: Create Consistency Group form has an exception

2017-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/454990
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=89bb9268204a2316fc526d660f38d5517980f209
Submitter: Jenkins
Branch:master

commit 89bb9268204a2316fc526d660f38d5517980f209
Author: wei.ying 
Date:   Sat Apr 8 21:41:24 2017 +0800

Fix create consistency group form exception

Volume type extra specs may not contain 'volume_backend_name';
it should be checked before being read.

Change-Id: I5dbc0636ba1c949df569acbbfc8a0879f7a76992
Closes-Bug: #1681073


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681073

Title:
  Create Consistency Group form has an exception

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Env: devstack master branch

  Steps to reproduce:
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form

  It throws an exception.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 132, in get_response
      response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
      return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", line 71, in view
      return self.dispatch(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", line 89, in dispatch
      return handler(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/workflows/views.py", line 199, in post
      exceptions.handle(request)
    File "/opt/stack/horizon/horizon/exceptions.py", line 352, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/opt/stack/horizon/horizon/workflows/views.py", line 194, in post
      success = workflow.finalize()
    File "/opt/stack/horizon/horizon/workflows/base.py", line 824, in finalize
      if not self.handle(self.request, self.context):
    File "/opt/stack/horizon/openstack_dashboard/dashboards/project/cgroups/workflows.py", line 323, in handle
      vol_type.extra_specs['volume_backend_name']
  KeyError: 'volume_backend_name'
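  The committed fix guards the key before reading it. A minimal Python
  sketch of that defensive pattern (VT is a stand-in class for a cinder
  volume type, not the horizon code):

```python
class VT:
    """Stand-in for a cinder volume type with an extra_specs dict."""
    def __init__(self, extra_specs):
        self.extra_specs = extra_specs

def backend_names(volume_types):
    """Collect backend names, skipping types without the extra spec."""
    names = set()
    for vol_type in volume_types:
        specs = getattr(vol_type, "extra_specs", {}) or {}
        name = specs.get("volume_backend_name")  # no KeyError on absence
        if name:
            names.add(name)
    return names

print(backend_names([VT({"volume_backend_name": "lvm"}), VT({})]))
```

  `dict.get` turns the hard KeyError into a skippable None, which is what
  the workflow needed for volume types created without a backend name.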

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1681073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681507] [NEW] horizon log message does not include logging level by default

2017-04-10 Thread Akihiro Motoki
Public bug reported:

In the default openstack_dashboard logging configuration, logging level
is not included. It is useful if logging level is included in log
message by default.

Note that we can add logging level to log message by customizing django
logging config.

** Affects: horizon
 Importance: Wishlist
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681507

Title:
  horizon log message does not include logging level by default

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the default openstack_dashboard logging configuration, logging
  level is not included. It is useful if logging level is included in
  log message by default.

  Note that we can add logging level to log message by customizing
  django logging config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1681507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674156] Re: neutron-lbaasv2-agent: TypeError: argument of type 'LoadBalancer' is not iterable

2017-04-10 Thread Akihiro Motoki
neutron-lbaas is now handled by a separate project, "octavia"; octavia is in
charge of the neutron-lbaas project as well.

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674156

Title:
  neutron-lbaasv2-agent: TypeError: argument of type 'LoadBalancer' is
  not iterable

Status in octavia:
  New
Status in neutron-lbaas package in Ubuntu:
  Triaged

Bug description:
  Is somebody actually running neutron LBaaSv2 with haproxy on Ubuntu
  16.04?

  root@controller1:~# dpkg -l neutron-lbaasv2-agent
  ii  neutron-lbaasv2-agent  2:8.3.0-0ubuntu1  all  Neutron is a virtual network service for Openstack - LBaaSv2 agent
  root@controller1:~# lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:    Ubuntu 16.04.2 LTS
  Release:        16.04
  Codename:       xenial
  root@controller1:~#

  
  From /var/log/neutron/neutron-lbaasv2-agent.log:

  2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] Logging enabled!
  2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] /usr/bin/neutron-lbaasv2-agent version 8.3.0
  2017-03-19 20:39:06.702 4528 WARNING oslo_config.cfg [req-9a6a669c-5a5a-4b6c-8c2f-2b1edd9462d9 - - - - -] Option "default_ipv6_subnet_pool" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
  2017-03-19 20:39:07.033 4528 ERROR neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-]

  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager [-] Unable to deploy instance for loadbalancer: c49473a7-b956-4a5d-8215-703335eb3320
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager Traceback (most recent call last):
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File "/usr/lib/python2.7/dist-packages/neutron_lbaas/agent/agent_manager.py", line 185, in _reload_loadbalancer
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager     self.device_drivers[driver_name].deploy_instance(loadbalancer)
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager     return f(*args, **kwargs)
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File "/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 332, in deploy_instance
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager     if not logical_config or not self._is_active(logical_config):
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File "/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 310, in _is_active
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager     if ('vip' not in logical_config or
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager TypeError: argument of type 'LoadBalancer' is not iterable
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager
  /etc/neutron/neutron_lbaas.conf:
  [service_providers]
  service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
  service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  Looking at the code, I don't see how this can actually work.

  /usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py +310

  def _is_active(self, logical_config):
      LOG.error(logical_config)
      # haproxy will be unable to start without any active vip
  ==> if ('vip' not in logical_config or
          (logical_config['vip']['status'] not in
           constants.ACTIVE_PENDING_STATUSES) or
          not logical_config['vip']['admin_state_up']):
          return False
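  The TypeError is easy to reproduce: using `in` on a plain object that has
  no `__contains__` (or `__iter__`) raises exactly this error, while the
  dict-shaped config the haproxy namespace driver expects works fine. A
  minimal sketch (the class here is a stand-in, not the real data model):

```python
class LoadBalancer:
    """Stand-in for the LBaaSv2 data model object passed to the driver."""
    def __init__(self):
        self.vip_address = "10.0.0.5"   # attribute access, not dict access

lb = LoadBalancer()
try:
    "vip" in lb                          # what _is_active effectively does
except TypeError as exc:
    print(exc)   # argument of type 'LoadBalancer' is not iterable

# A dict-shaped logical_config, as the driver expects, works:
print("vip" in {"vip": {"status": "ACTIVE"}})   # True
```

  So the v1-era driver code and the v2 data model disagree on the shape of
  logical_config, which matches the traceback above.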

  
  
/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/data_models.py:

  class LoadBalancer(BaseDataModel):

  fields = ['id', 'tenant_id', 'name', 'description', 'vip_subnet_id',
'vip_port_id', 'vip_address', 'provisioning_status',
'operating_status', 'admin_state_up', 'vip_port', 'stats',
'provider', 'listeners', 'pools', 'flavor_id']

  def __init__(self, id=None, tenant_id=None, name=None, description=None,
   vip_subnet_id=None, vip_port_id=None, vip_address=None,
   provisioning_status=None, operating_status=None,
   admin_state_up=None, vip_port=None, stats=None,
   provider=None, lis

[Yahoo-eng-team] [Bug 1674392] Re: placement api requires content-type on put and post even when no body

2017-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/447625
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6dd047a3307a1056077608fd5bc2d1c3b3285338
Submitter: Jenkins
Branch:master

commit 6dd047a3307a1056077608fd5bc2d1c3b3285338
Author: Chris Dent 
Date:   Mon Mar 20 17:46:27 2017 +

[placement] Allow PUT and POST without bodies

We plan to allow PUT requests to create/update both custom traits
and custom resource classes, without bodies. Prior to this change,
the code would not allow a PUT, POST or PATCH to not have a body. This was
added in I6e7dffb5dc5f0cdc78a57e8df3ae9952c55163ae which was fixing an
issue with how webob handles exceptions.

This change does two things:

* It addresses the problem from bug #1623517, fixed in the change id'd
  above, in a more narrow fashion, making sure the data source that
  causes the KeyError is non-empty right before it is used. This allows
  simplifying the following change.
* If a request has a content-length (indicating the presence of a body),
  verify that there is also a content-type. If not, raise a 400.

basic-http.yaml has been changed to modify one gabbi test to check a
response body is correct and to add another test to confirm that the
code that is doing the content-length check is passed through.

Change-Id: Ifb7446fd02ba3e54bbe2676dfd38e5dfecd15f98
Closes-Bug: #1674392
Related-Bug: #1623517


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1674392

Title:
  placement api requires content-type on put and post even when no body

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The placement API has a guarding condition which checks the request
  method. If it is PUT, POST or PATCH, the presence of a content-type
  header is required.

  This is too strict (but happened to work fine for the API at the
  time). It is reasonable and okay to make a PUT or POST without a body,
  and thus without a content-type, and now we want to do such things
  within the placement API (when putting custom traits and resource
  classes).

  The fix is to only raise the 400 when content-length is set and non-
  zero. In that case a missing content-type is a bug, irrespective of
  method.
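The narrowed check can be sketched in a few lines. This is only an illustration of the rule described above (function and parameter names are hypothetical, not the actual placement code):

```python
def needs_400(content_length, content_type):
    """Return True when the request must be rejected with a 400.

    Sketch of the narrowed guard: a content-type header is only
    mandatory when a body is actually present, i.e. when
    content-length is set and non-zero, irrespective of method.
    """
    has_body = content_length is not None and int(content_length) > 0
    return has_body and not content_type

# A bodiless PUT or POST no longer needs a content-type header.
print(needs_400(None, None))                  # False -> request allowed
# A request carrying a body without a content-type is still a bug.
print(needs_400('42', None))                  # True  -> 400 Bad Request
print(needs_400('42', 'application/json'))    # False -> request allowed
```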

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1674392/+subscriptions



[Yahoo-eng-team] [Bug 1681391] Re: network and router persists when deleting project

2017-04-10 Thread Brian Haley
There is an administrative call to remove all neutron resources
associated with a tenant:

(admin) $ neutron purge TENANT

That should do what you want.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1681391

Title:
  network and router persists when deleting project

Status in neutron:
  Invalid

Bug description:
  As an admin, I created a new project Xproject and a user X; this user
  logged on to his account and created routers and networks in his
  project. After I removed the user and project, the "openstack network
  list" command still shows that the network created by user X exists in
  the system, although the user and project are deleted. I think this
  could be a bug!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1681391/+subscriptions



[Yahoo-eng-team] [Bug 1660317] Re: NotImplementedError for detach_interface in nova-compute during instance deletion

2017-04-10 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660317

Title:
  NotImplementedError for detach_interface in nova-compute during
  instance deletion

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in ironic package in Ubuntu:
  Invalid
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  When a baremetal instance is deleted, there is a harmless but annoying
  trace in the nova-compute output.

  nova.compute.manager[26553]: INFO [instance: 
e265be67-9e87-44ea-95b6-641fc2dcaad8] Terminating instance 
[req-5f1eba69-239a-4dd4-8677-f28542b190bc 5a08515f35d749068a6327e387ca04e2 
7d450ecf00d64399aeb93bc122cb6dae - - -]
  nova.compute.resource_tracker[26553]: INFO Auditing locally available compute 
resources for node d02c7361-5e3a-4fdf-89b5-f29b3901f0fc 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.resource_tracker[26553]: INFO Final resource view: 
name=d02c7361-5e3a-4fdf-89b5-f29b3901f0fc phys_ram=0MB used_ram=8096MB 
phys_disk=0GB used_disk=480GB total_vcpus=0 used_vcpus=0 pci_stats=[] 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.resource_tracker[26553]: INFO Compute_service record updated for 
bare-compute1:d02c7361-5e3a-4fdf-89b5-f29b3901f0fc 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.manager[26553]: INFO [instance: 
e265be67-9e87-44ea-95b6-641fc2dcaad8] Neutron deleted interface 
6b563aa7-64d3-4105-9ed5-c764fee7b536; detaching it from the instance and 
deleting it from the info cache [req-fdfeee26-a860-40a5-b2e3-2505973ffa75 
11b95cf353f74788938f580e13b652d8 93c697ef6c2649eb9966900a8d6a73d8 - - -]
  oslo_messaging.rpc.server[26553]: ERROR Exception during message handling 
[req-fdfeee26-a860-40a5-b2e3-2505973ffa75 11b95cf353f74788938f580e13b652d8 
93c697ef6c2649eb9966900a8d6a73d8 - - -]
  oslo_messaging.rpc.server[26553]: TRACE Traceback (most recent call last):
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
  oslo_messaging.rpc.server[26553]: TRACE res = 
self.dispatcher.dispatch(message)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 150, 
in dispatch
  oslo_messaging.rpc.server[26553]: TRACE return 
self._do_dispatch(endpoint, method, ctxt, args)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 121, 
in _do_dispatch
  oslo_messaging.rpc.server[26553]: TRACE result = func(ctxt, **new_args)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 75, in 
wrapped
  oslo_messaging.rpc.server[26553]: TRACE function_name, call_dict, binary)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  oslo_messaging.rpc.server[26553]: TRACE self.force_reraise()
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  oslo_messaging.rpc.server[26553]: TRACE six.reraise(self.type_, 
self.value, self.tb)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 66, in 
wrapped
  oslo_messaging.rpc.server[26553]: TRACE return f(self, context, *args, 
**kw)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6691, in 
external_instance_event
  oslo_messaging.rpc.server[26553]: TRACE event.tag)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6660, in 
_process_instance_vif_deleted_event
  oslo_messaging.rpc.server[26553]: TRACE 
self.driver.detach_interface(instance, vif)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 524, in 
detach_interface
  oslo_messaging.rpc.server[26553]: TRACE raise NotImplementedError()
  oslo_messaging.rpc.server[26553]: TRACE NotImplementedError
  oslo_messaging.rpc.server[26553]: TRACE

  
  Affected version:
  nova 14.0.3
  neutron 6.0.0
  ironic 6.2.1

  configuration for nova-compute:
  compute_driver = ironic.IronicDriver

  Ironic is configured to use neutron networks with generic switch as the
  mechanism driver for the ML2 plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1660317/+subscriptions


[Yahoo-eng-team] [Bug 1674156] Re: neutron-lbaasv2-agent: TypeError: argument of type 'LoadBalancer' is not iterable

2017-04-10 Thread Chuck Short
** Package changed: neutron-lbaas (Ubuntu) => neutron

** Also affects: neutron-lbaas (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: neutron-lbaas (Ubuntu)
   Status: New => Triaged

** Changed in: neutron-lbaas (Ubuntu)
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674156

Title:
  neutron-lbaasv2-agent: TypeError: argument of type 'LoadBalancer' is
  not iterable

Status in neutron:
  New
Status in neutron-lbaas package in Ubuntu:
  Triaged

Bug description:
  Is somebody actually running neutron LBaaSv2 with haproxy on Ubuntu
  16.04?

  root@controller1:~# dpkg -l neutron-lbaasv2-agent
  ii  neutron-lbaasv2-agent   2:8.3.0-0ubuntu1  
   all  Neutron is a virtual network service for 
Openstack - LBaaSv2 agent
  root@controller1:~# lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu 16.04.2 LTS
  Release:16.04
  Codename:   xenial
  root@controller1:~# 

  
  From /var/log/neutron/neutron-lbaasv2-agent.log:

  2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] Logging enabled!
  2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] 
/usr/bin/neutron-lbaasv2-agent version 8.3.0
  2017-03-19 20:39:06.702 4528 WARNING oslo_config.cfg 
[req-9a6a669c-5a5a-4b6c-8c2f-2b1edd9462d9 - - - - -] Option 
"default_ipv6_subnet_pool" from group "DEFAULT" is deprecated for removal.  Its 
value may be silently ignored in the future.
  2017-03-19 20:39:07.033 4528 ERROR 
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] 

  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager [-] 
Unable to deploy instance for loadbalancer: c49473a7-b956-4a5d-8215-703335eb3320
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/agent/agent_manager.py", line 
185, in _reload_loadbalancer
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
self.device_drivers[driver_name].deploy_instance(loadbalancer)
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in 
inner
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 332, in deploy_instance
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager if 
not logical_config or not self._is_active(logical_config):
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 310, in _is_active
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager if 
('vip' not in logical_config or
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
TypeError: argument of type 'LoadBalancer' is not iterable
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 

  /etc/neutron/neutron_lbaas.conf:
  [service_providers]
  
service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
  service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  Looking at the code, I don't see how this can actually work.

  /usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py +310

  def _is_active(self, logical_config):
  LOG.error(logical_config)
  # haproxy wil be unable to start without any active vip
  ==> if ('vip' not in logical_config or
  (logical_config['vip']['status'] not in
   constants.ACTIVE_PENDING_STATUSES) or
  not logical_config['vip']['admin_state_up']):
  return False
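The TypeError is easy to reproduce in isolation: a plain object that defines neither `__contains__` nor `__iter__` cannot be used with the `in` operator, which is exactly what `_is_active()` does with the `LoadBalancer` data model it receives. The class below is a hypothetical stand-in, not the real neutron_lbaas one:

```python
class LoadBalancer(object):
    """Hypothetical stand-in for the data model handed to _is_active()."""
    def __init__(self):
        self.admin_state_up = True

lb = LoadBalancer()
try:
    'vip' not in lb  # the membership test used by _is_active()
except TypeError as exc:
    # argument of type 'LoadBalancer' is not iterable
    print(exc)
```

The fix direction is either to pass `_is_active()` a dict, or to make the membership test use attribute access instead of `in`.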

  
  
/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/data_models.py:

  class LoadBalancer(BaseDataModel):

  fields = ['id', 'tenant_id', 'name', 'description', 'vip_subnet_id',
'vip_port_id', 'vip_address', 'provisioning_status',
'operating_status', 'admin_state_up', 'vip_port', 'stats',
'provider', 'listeners', 'pools', 'flavor_id']

  def __init__(self, id=None, tenant_id=None, name=None, description=None,
   vip_subnet_id=None, vip_port_id=None, vip_address=None,
   provisioning_status=

[Yahoo-eng-team] [Bug 1674156] [NEW] neutron-lbaasv2-agent: TypeError: argument of type 'LoadBalancer' is not iterable

2017-04-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Is somebody actually running neutron LBaaSv2 with haproxy on Ubuntu
16.04?

root@controller1:~# dpkg -l neutron-lbaasv2-agent
ii  neutron-lbaasv2-agent   2:8.3.0-0ubuntu1
 all  Neutron is a virtual network service for 
Openstack - LBaaSv2 agent
root@controller1:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 16.04.2 LTS
Release:16.04
Codename:   xenial
root@controller1:~# 


>From /var/log/neutron/neutron-lbaasv2-agent.log:

2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] Logging enabled!
2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] 
/usr/bin/neutron-lbaasv2-agent version 8.3.0
2017-03-19 20:39:06.702 4528 WARNING oslo_config.cfg 
[req-9a6a669c-5a5a-4b6c-8c2f-2b1edd9462d9 - - - - -] Option 
"default_ipv6_subnet_pool" from group "DEFAULT" is deprecated for removal.  Its 
value may be silently ignored in the future.
2017-03-19 20:39:07.033 4528 ERROR 
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] 

2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager [-] Unable 
to deploy instance for loadbalancer: c49473a7-b956-4a5d-8215-703335eb3320
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager Traceback 
(most recent call last):
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/agent/agent_manager.py", line 
185, in _reload_loadbalancer
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
self.device_drivers[driver_name].deploy_instance(loadbalancer)
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in 
inner
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager return 
f(*args, **kwargs)
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 332, in deploy_instance
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager if not 
logical_config or not self._is_active(logical_config):
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 310, in _is_active
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager if 
('vip' not in logical_config or
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager TypeError: 
argument of type 'LoadBalancer' is not iterable
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 

/etc/neutron/neutron_lbaas.conf:
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Looking at the code, I don't see how this can actually work.

/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py +310

def _is_active(self, logical_config):
LOG.error(logical_config)
# haproxy wil be unable to start without any active vip
==> if ('vip' not in logical_config or
(logical_config['vip']['status'] not in
 constants.ACTIVE_PENDING_STATUSES) or
not logical_config['vip']['admin_state_up']):
return False


/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/data_models.py:

class LoadBalancer(BaseDataModel):

fields = ['id', 'tenant_id', 'name', 'description', 'vip_subnet_id',
  'vip_port_id', 'vip_address', 'provisioning_status',
  'operating_status', 'admin_state_up', 'vip_port', 'stats',
  'provider', 'listeners', 'pools', 'flavor_id']

def __init__(self, id=None, tenant_id=None, name=None, description=None,
 vip_subnet_id=None, vip_port_id=None, vip_address=None,
 provisioning_status=None, operating_status=None,
 admin_state_up=None, vip_port=None, stats=None,
 provider=None, listeners=None, pools=None, flavor_id=None):
self.id = id
self.tenant_id = tenant_id
self.name = name
self.description = description
self.vip_subnet_id = vip_subnet_id
self.vip_port_id = vip_port_id
self.vip_address = vip_address
self.operating_status = operating_status
self.provisioning_status = provisioning_status
self.admin_state_up = admin_state_up
self.vip_port = vip_port
self.stats = stats
self.provider = provider
self.listeners = listeners or []
s

[Yahoo-eng-team] [Bug 1678686] Re: keystoneauth doesn't use a default cafile

2017-04-10 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1678686

Title:
  keystoneauth doesn't use a default cafile

Status in keystoneauth:
  In Progress
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  KeystoneAuth doesn't use a default cafile; this causes problems when
  generating a local CA or using a self-signed CA with HTTPS-enabled
  endpoints. Even though the CA can be installed locally, keystoneauth
  will still fail SSL verification.
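A common workaround until a default is applied is to point each service's keystoneauth-loaded section at the CA bundle explicitly. This is a sketch; the section name applies to keystonemiddleware auth, and the path is illustrative:

```ini
[keystone_authtoken]
# keystoneauth does not fall back to the system CA store, so name the
# bundle that contains the local/self-signed CA (example path).
cafile = /etc/ssl/certs/ca-certificates.crt
```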

  
  =
  2017-04-03 00:54:49.305 545 DEBUG oslo_messaging._drivers.amqpdriver [-] 
received reply msg_id: bb9ce702f5864adf8e4720d2304fcb2a __call__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:346
  2017-04-03 00:54:49.337 545 DEBUG cinderclient.v2.client 
[req-7cb00c0e-be3d-4e25-b369-fd8aecbae803 7106629bf3b440a79030d327abd0747e 
2aeed525cd4e4f329b0567be30d3aa6c - default default] REQ: curl -g -i -X GET 
https://openstack.local.net:8776/v2/2aeed525cd4e4f329b0567be30d3aa6c/volumes/ef828539-027c-4daa-9c96-19d2f3cd51e3
 -H "X-Service-Token: {SHA1}77aedd00ae7642ecf44c452749b8b3ed6f45330d" -H 
"User-Agent: python-cinderclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}a91d7c21ef9f2401ffbe691355000e7bcc9d390c" 
_http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:347
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions 
[req-7cb00c0e-be3d-4e25-b369-fd8aecbae803 7106629bf3b440a79030d327abd0747e 
2aeed525cd4e4f329b0567be30d3aa6c - default default] Unexpected exception in API 
method
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 338, 
in wrapped
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/volumes.py", line 
338, in create
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions 
volume_id, device)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 204, in inner
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions return 
function(self, context, instance, *args, **kwargs)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 152, in inner
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions return 
f(self, context, instance, *args, **kw)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3772, in 
attach_volume
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions disk_bus, 
device_type)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3715, in 
_attach_volume
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions 
volume_bdm.destroy()
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions 
self.force_reraise()
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions 
six.reraise(self.type_, self.value, self.tb)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3711, in 
_attach_volume
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions 
self._check_attach_and_reserve_volume(context, volume_id, instance)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3693, in 
_check_attach_and_reserve_volume
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions volume = 
self.volume_api.get(context, volume_id)
  2017-04-03 00:54:49.442 545 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 177, in wrapper
  2017-04-03 00:54:

[Yahoo-eng-team] [Bug 1657452] Re: Incompatibility with python-webob 1.7.0

2017-04-10 Thread Chuck Short
** Changed in: glance (Ubuntu)
   Status: Triaged => Fix Committed

** Changed in: glance (Ubuntu)
   Status: Fix Committed => Fix Released

** Changed in: keystone (Ubuntu)
   Status: Triaged => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657452

Title:
  Incompatibility with python-webob 1.7.0

Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.middleware:
  Confirmed
Status in glance package in Ubuntu:
  Fix Released
Status in keystone package in Ubuntu:
  Fix Committed
Status in nova package in Ubuntu:
  Triaged
Status in python-oslo.middleware package in Ubuntu:
  Triaged

Bug description:
  
  
keystone.tests.unit.test_v3_federation.WebSSOTests.test_identity_provider_specific_federated_authentication
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_v3_federation.py", line 4067, in 
test_identity_provider_specific_federated_authentication
  self.PROTOCOL)
File "keystone/federation/controllers.py", line 345, in 
federated_idp_specific_sso_auth
  return self.render_html_response(host, token_id)
File "keystone/federation/controllers.py", line 357, in 
render_html_response
  headerlist=headers)
File "/usr/lib/python2.7/dist-packages/webob/response.py", line 310, in 
__init__
  "You cannot set the body to a text value without a "
  TypeError: You cannot set the body to a text value without a charset

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1657452/+subscriptions



[Yahoo-eng-team] [Bug 1681440] [NEW] QoS policy object can't be suitable with 1.2 version of object

2017-04-10 Thread Slawek Kaplonski
Public bug reported:

In
https://github.com/openstack/neutron/blob/master/neutron/objects/qos/policy.py#L220
there is no function to make the QoS policy object compatible with version
1.2 and higher (appending the QoSMinimumBandwidthLimit rules to the policy)
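In oslo.versionedobjects the usual fix is an obj_make_compatible() hook that backlevels the serialized object for consumers of an older version. The following is a hedged, standalone sketch of that pattern; the field and rule-type names are illustrative, not the real neutron ones:

```python
def make_policy_compatible(primitive, target_version):
    """Downgrade a QoS-policy-like dict for a pre-1.2 consumer (sketch)."""
    major, minor = (int(part) for part in target_version.split('.'))
    if (major, minor) < (1, 2):
        # Version 1.2 introduced minimum-bandwidth rules; older consumers
        # do not understand them, so strip them from the serialized form.
        primitive['rules'] = [rule for rule in primitive.get('rules', [])
                              if rule.get('type') != 'minimum_bandwidth']
    return primitive

policy = {'id': 'p1', 'rules': [{'type': 'bandwidth_limit'},
                                {'type': 'minimum_bandwidth'}]}
print(make_policy_compatible(dict(policy), '1.1')['rules'])
# -> [{'type': 'bandwidth_limit'}]
```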

** Affects: neutron
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: qos

** Changed in: neutron
 Assignee: (unassigned) => Slawek Kaplonski (slaweq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1681440

Title:
  QoS policy object can't be suitable with 1.2 version of object

Status in neutron:
  New

Bug description:
  In
  
https://github.com/openstack/neutron/blob/master/neutron/objects/qos/policy.py#L220
  there is no function to make the QoS policy object compatible with version
  1.2 and higher (appending the QoSMinimumBandwidthLimit rules to the policy)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1681440/+subscriptions



[Yahoo-eng-team] [Bug 1680765] Re: (HTTP 500)

2017-04-10 Thread Matt Riedemann
It also looks like you're using glance v1. Nova deprecated support for
glance v1 in Newton and defaulted to using glance v2. Make sure you have
the config correct and see if you are running glance v1 and/or v2 API
endpoints.
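To check which image API nova is pointed at, look at the [glance] section of nova.conf. The values below are illustrative:

```ini
[glance]
# Newton-era nova defaults to the v2 image API; make sure the servers
# listed here actually serve the v2 endpoint (example value).
api_servers = http://controller:9292
```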

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680765

Title:
   (HTTP 500)

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  With Debian 8.6 jessie, and in Newton, when executing the following
  command to create a test instance in OpenStack (following the official
  OpenStack Newton for Debian guide):

  
  openstack server create --flavor m1.nano --image cirros --security-group 
default --key-name mykey provider-instance

  
  I get the following error:

  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-64808b32-4971-4ab0-8a80-540b1f6cb527)

  
  Here's the nova-api-log as suggested:

  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions 
[req-ba2fba0c-7800-4886-9837-5d5ab0381eff c70c1c93d7b940dfb4255d8421d12009 
5caa95a73f114f0584c9cd8ed3b02d2f - - -] Unexpected exception in API method
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/images.py", line 
87, in show
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions image = 
self._image_api.get(context, id)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/api.py", line 93, in get
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions 
show_deleted=show_deleted)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 333, in show
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions 
_reraise_translated_image_exception(image_id)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 682, in 
_reraise_translated_image_exception
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions 
six.reraise(new_exc, None, exc_trace)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 331, in show
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions image = 
self._client.call(context, version, 'get', image_id)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 250, in call
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions result 
= getattr(client.images, method)(*args, **kwargs)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 132, in get
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions % 
urlparse.quote(str(image_id)))
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 284, in 
head
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions return 
self._request('HEAD', url, **kwargs)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 279, in 
_request
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions resp, 
body_iter = self._handle_response(resp)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 107, in 
_handle_response
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions raise 
exc.from_response(resp, resp.content)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions 
HTTPInternalServerError: HTTPInternalServerError (HTTP 500)
  2017-04-07 11:08:06.195 30396 ERROR nova.api.openstack.extensions 
  2017-04-07 11:08:06.197 30396 INFO nova.api.openstack.wsgi 
[req-ba2fba0c-7800-4886-9837-5d5ab0381eff c70c1c93d7b940dfb4255d8421d12009 
5caa95a73f114f0584c9cd8ed3b02d2f - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.
  
  2017

[Yahoo-eng-team] [Bug 1681431] Re: "nova-manage db sync" fails from Mitaka to Newton because deleted compute nodes

2017-04-10 Thread Belmiro Moreira
*** This bug is a duplicate of bug 1665719 ***
https://bugs.launchpad.net/bugs/1665719

Already fixed in bug #1665719.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1681431

Title:
  "nova-manage db sync" fails from Mitaka to Newton because deleted
  compute nodes

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  "nova-manage db sync" fails from Mitaka to Newton because deleted compute 
nodes

  DB migration from Mitaka to Newton fails in migration 330 with:
  "error: There are still XX unmigrated records in the compute_nodes table. 
Migration cannot continue until all records have been migrated."

  This migration checks if there are compute_nodes without a UUID.
  However, "nova-manage db online_data_migrations" in Mitaka only
  migrates non deleted compute_node entries.

  
  Steps to reproduce
  ==
  1) Have a nova Mitaka DB (319)
  2) Make sure you have a deleted entry (deleted>0) in "compute_nodes" table.
  3) Make sure all data migrations are done in Mitaka. ("nova-manage db 
online_data_migrations")
  4) Sync the DB for Newton. ("nova-manage db sync" in a Newton node)

  
  Expected result
  ===
  DB migrations succeed (334)

  
  Actual result
  =
  DB doesn't migrate (329)

  
  Environment
  ===
  Tested with "13.1.2" and "14.0.3".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1681431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681339] Re: Image name not populated against an Instance

2017-04-10 Thread Matt Riedemann
This sounds like a Horizon issue to triage. The server response should
have an image id in it, and some tools, like nova CLI, will show the
image name along with the image id, but I don't know about Horizon.
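
For context, a minimal Python sketch of the second lookup a client such as the nova CLI performs. This is illustrative only: the function and variable names are invented, and the dict stands in for a call to the image service (e.g. GET /v2/images/{id}).

```python
# Hypothetical sketch: a server record carries only the image id, so a
# client wanting to display the image name must resolve it separately.
def image_display(server, image_names):
    """Return 'name (id)' when the name can be resolved, else just the id."""
    image_id = server["image"]["id"]
    name = image_names.get(image_id)  # stand-in for an image-service lookup
    return "%s (%s)" % (name, image_id) if name else image_id

server = {"image": {"id": "abc-123"}}
print(image_display(server, {"abc-123": "cirros"}))  # cirros (abc-123)
print(image_display(server, {}))                     # abc-123
```

Whether Horizon performs this resolution for the instance list is exactly the open question above.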

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1681339

Title:
  Image name not populated against an Instance

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Newton Openstack:
  ---
  On launching an instance using an image, once the instance is created, no image name is populated against it in the instance list or the instance details.

  
  Mitaka Openstack:
  ---
  On the Mitaka dashboard, the Source panel of the Launch Instance window gives the option to create and attach a new volume of the desired size along with the image used to create the instance. However, when I create an instance from an image together with a new attached volume, the instance is created successfully but does not show the image details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1681339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681431] Re: "nova-manage db sync" fails from Mitaka to Newton because deleted compute nodes

2017-04-10 Thread Matt Riedemann
*** This bug is a duplicate of bug 1665719 ***
https://bugs.launchpad.net/bugs/1665719

This was already fixed; the Newton patch is here:

https://review.openstack.org/#/c/435620/

** This bug has been marked a duplicate of bug 1665719
   330_enforce_mitaka_online_migrations blocks on soft-deleted records

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1681431

Title:
  "nova-manage db sync" fails from Mitaka to Newton because deleted
  compute nodes

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  "nova-manage db sync" fails from Mitaka to Newton because deleted compute 
nodes

  DB migration from Mitaka to Newton fails in migration 330 with:
  "error: There are still XX unmigrated records in the compute_nodes table. 
Migration cannot continue until all records have been migrated."

  This migration checks if there are compute_nodes without a UUID.
  However, "nova-manage db online_data_migrations" in Mitaka only
  migrates non deleted compute_node entries.

  
  Steps to reproduce
  ==
  1) Have a nova Mitaka DB (319)
  2) Make sure you have a deleted entry (deleted>0) in "compute_nodes" table.
  3) Make sure all data migrations are done in Mitaka. ("nova-manage db 
online_data_migrations")
  4) Sync the DB for Newton. ("nova-manage db sync" in a Newton node)

  
  Expected result
  ===
  DB migrations succeed (334)

  
  Actual result
  =
  DB doesn't migrate (329)

  
  Environment
  ===
  Tested with "13.1.2" and "14.0.3".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1681431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681431] [NEW] "nova-manage db sync" fails from Mitaka to Newton because deleted compute nodes

2017-04-10 Thread Belmiro Moreira
Public bug reported:

Description
===
"nova-manage db sync" fails from Mitaka to Newton because deleted compute nodes

DB migration from Mitaka to Newton fails in migration 330 with:
"error: There are still XX unmigrated records in the compute_nodes table. 
Migration cannot continue until all records have been migrated."

This migration checks if there are compute_nodes without a UUID.
However, "nova-manage db online_data_migrations" in Mitaka only migrates
non deleted compute_node entries.


Steps to reproduce
==
1) Have a nova Mitaka DB (319)
2) Make sure you have a deleted entry (deleted>0) in "compute_nodes" table.
3) Make sure all data migrations are done in Mitaka. ("nova-manage db 
online_data_migrations")
4) Sync the DB for Newton. ("nova-manage db sync" in a Newton node)


Expected result
===
DB migrations succeed (334)


Actual result
=
DB doesn't migrate (329)


Environment
===
Tested with "13.1.2" and "14.0.3".
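
For illustration, a minimal Python sketch (not nova's actual code; the function names are invented) of why the blocker migration trips: migration 330 counts every compute_nodes row lacking a UUID, while Mitaka's online_data_migrations only backfills non-deleted rows, so a soft-deleted row slips through and later blocks the sync.

```python
import uuid

def online_data_migrations(rows):
    """Mimics the Mitaka backfill: only non-deleted rows get a uuid."""
    for r in rows:
        if r["deleted"] == 0 and r["uuid"] is None:
            r["uuid"] = str(uuid.uuid4())

def count_unmigrated(rows):
    """Mimics the migration-330 check: every row without a uuid counts."""
    return sum(1 for r in rows if r["uuid"] is None)

nodes = [
    {"id": 1, "deleted": 0, "uuid": None},  # live compute node
    {"id": 2, "deleted": 2, "uuid": None},  # soft-deleted compute node
]
online_data_migrations(nodes)
print(count_unmigrated(nodes))  # 1 -- the soft-deleted row still blocks
```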

** Affects: nova
 Importance: Undecided
 Assignee: Belmiro Moreira (moreira-belmiro-email-lists)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Belmiro Moreira (moreira-belmiro-email-lists)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1681431

Title:
  "nova-manage db sync" fails from Mitaka to Newton because deleted
  compute nodes

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  "nova-manage db sync" fails from Mitaka to Newton because deleted compute 
nodes

  DB migration from Mitaka to Newton fails in migration 330 with:
  "error: There are still XX unmigrated records in the compute_nodes table. 
Migration cannot continue until all records have been migrated."

  This migration checks if there are compute_nodes without a UUID.
  However, "nova-manage db online_data_migrations" in Mitaka only
  migrates non deleted compute_node entries.

  
  Steps to reproduce
  ==
  1) Have a nova Mitaka DB (319)
  2) Make sure you have a deleted entry (deleted>0) in "compute_nodes" table.
  3) Make sure all data migrations are done in Mitaka. ("nova-manage db 
online_data_migrations")
  4) Sync the DB for Newton. ("nova-manage db sync" in a Newton node)

  
  Expected result
  ===
  DB migrations succeed (334)

  
  Actual result
  =
  DB doesn't migrate (329)

  
  Environment
  ===
  Tested with "13.1.2" and "14.0.3".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1681431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680840] Re: Some pages have no page title since used the wrong variable.

2017-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/454718
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b9cc9fd4144e5dad094bc2507d6ecc6408bce7ed
Submitter: Jenkins
Branch:master

commit b9cc9fd4144e5dad094bc2507d6ecc6408bce7ed
Author: wei.ying 
Date:   Fri Apr 7 21:12:16 2017 +0800

Fix incorrect window title in admin snapshots and volume types

They all missing the correct title before "- OpenStack Dashboard"
since used the wrong variable.

Change-Id: Ic03ee11a4492ca2084078faef7dc1f0253b3b9eb
Closes-Bug: #1680840


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1680840

Title:
  Some pages have no page title since used the wrong variable.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Env: devstack master branch

  In path of admin snapshots and admin volume types window title is only
  " - OpenStack Dashboard"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1680840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681163] Re: System Information orchestration policy rule error.

2017-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/455031
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=038b991da77985dd826c8c68f8fc7d395ad478cf
Submitter: Jenkins
Branch:master

commit 038b991da77985dd826c8c68f8fc7d395ad478cf
Author: wei.ying 
Date:   Sun Apr 9 09:58:19 2017 +0800

Correct the word orchestation to orchestration

Heat policy rule key is orchestration.

Change-Id: Iad00116ff92247961283d3c5ad6c6c688ea58881
Closes-Bug: #1681163


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681163

Title:
  System Information orchestration policy rule error.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Env: devstack master branch

  path:openstack_dashboard/dashboards/admin/info/panel.py L30

  orchestation->orchestration

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1681163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681391] [NEW] network and router persists when deleting project

2017-04-10 Thread Amir DHAOUI
Public bug reported:

As an admin, I create a new project Xproject and a user X. The user logs
on to his account and creates routers and networks in his project. After
I delete the user and the project, the "openstack network list" command
still shows the network created by user X, even though the user and the
project are deleted. I think this could be a bug!
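
As a workaround, orphaned resources can be detected by comparing project ids. The sketch below is hypothetical (the function name is invented, and the dicts stand in for data fetched via the APIs or openstacksdk); it is not neutron code.

```python
# Hypothetical cleanup sketch: find networks whose owning project no
# longer exists, so they can be deleted by an admin afterwards.
def orphaned_networks(networks, live_project_ids):
    """Return networks whose project_id is not among the live projects."""
    return [n for n in networks if n["project_id"] not in live_project_ids]

live_projects = {"p-live"}  # ids returned by the identity service
networks = [
    {"id": "net-1", "project_id": "p-live"},
    {"id": "net-2", "project_id": "p-deleted"},  # owner project was removed
]
print([n["id"] for n in orphaned_networks(networks, live_projects)])  # ['net-2']
```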

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1681391

Title:
  network and router persists when deleting project

Status in neutron:
  New

Bug description:
  As an admin, I create a new project Xproject and a user X. The user logs
  on to his account and creates routers and networks in his project. After
  I delete the user and the project, the "openstack network list" command
  still shows the network created by user X, even though the user and the
  project are deleted. I think this could be a bug!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1681391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681348] [NEW] keystone list project api returns empty if "?name=" is added as url parameter

2017-04-10 Thread Xiaoliang Li
Public bug reported:

request: https://{{keystone_ip}}:5000/v3/projects?name=
expected: returns all projects of the current user.
actual: returns an empty list.

Other OpenStack components obey this convention properly, so keystone is
inconsistent with them.
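
A minimal sketch of the two filtering behaviours, assuming the reported semantics (this is illustrative Python, not keystone's implementation; the function names are invented):

```python
projects = [{"name": "demo"}, {"name": "admin"}]

def list_exact(projects, name):
    # keystone-style: any provided value, even "", is an exact-match filter
    return [p for p in projects if p["name"] == name]

def list_lenient(projects, name):
    # behaviour the reporter expects: an empty value means "no filter"
    if not name:
        return projects
    return [p for p in projects if p["name"] == name]

print(len(list_exact(projects, "")))    # 0 -> the reported empty result
print(len(list_lenient(projects, "")))  # 2 -> all projects
```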

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1681348

Title:
  keystone list project api returns empty if "?name=" is added as url
  parameter

Status in OpenStack Identity (keystone):
  New

Bug description:
  request: https://{{keystone_ip}}:5000/v3/projects?name=
  expected: returns all projects of the current user.
  actual: returns an empty list.

  Other OpenStack components obey this convention properly, so keystone
  is inconsistent with them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1681348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681339] [NEW] Image name not populated against an Instance

2017-04-10 Thread divya
Public bug reported:

Newton Openstack:
---
On launching an instance using an image, once the instance is created, no image name is populated against it in the instance list or the instance details.


Mitaka Openstack:
---
On the Mitaka dashboard, the Source panel of the Launch Instance window gives the option to create and attach a new volume of the desired size along with the image used to create the instance. However, when I create an instance from an image together with a new attached volume, the instance is created successfully but does not show the image details.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1681339

Title:
  Image name not populated against an Instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Newton Openstack:
  ---
  On launching an instance using an image, once the instance is created, no image name is populated against it in the instance list or the instance details.

  
  Mitaka Openstack:
  ---
  On the Mitaka dashboard, the Source panel of the Launch Instance window gives the option to create and attach a new volume of the desired size along with the image used to create the instance. However, when I create an instance from an image together with a new attached volume, the instance is created successfully but does not show the image details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1681339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp