[Yahoo-eng-team] [Bug 1449858] [NEW] nova doesn't create the local disk if device=vda is specified in block_device_mapping_v2

2015-04-28 Thread Eli Qiao
Public bug reported:

I want to create an instance with a local disk (1G) and attach a
blank (newly created) volume (2G).

I used the following nova command line, but specified the bdm's device=vda:

nova --debug boot --image baaece45-58a2-4e4c-b8b7-7b3f12fe4bc6 --flavor
1 --nic net-id=9af3d913-dd65-4864-88b5-cd42bce3f672 --block-device
source=blank,dest=volume,device=vda,size=2 test12

I got an instance, but checking the domain XML
I only got 1 block device (2G); no local disk was created.

Expected result:
1. raise BadRequest if device=vda is specified in --block-device,
or
2. just ignore device=vda in --block-device and create 2 devices for this instance.

I prefer option 1 (see the sketch below).
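
A hypothetical server-side check for option 1 (a sketch only; the helper
name and error text are illustrative, not nova's actual code):

    from webob import exc

    def validate_blank_bdm_device(root_device_name, bdm):
        # An image-backed boot already owns the root device; a blank
        # volume claiming the same device name should be rejected.
        if (bdm.get('source_type') == 'blank'
                and bdm.get('device_name') == root_device_name):
            raise exc.HTTPBadRequest(
                explanation='device %s conflicts with the image-backed '
                            'root disk' % root_device_name)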

** Affects: nova
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: New

** Description changed:

  I want to create an instance with a local disk (1G) and attach a
  blank (newly created) volume (2G).

  I used the following nova command line, but specified the bdm's device=vda:

  nova --debug boot --image baaece45-58a2-4e4c-b8b7-7b3f12fe4bc6 --flavor
  1 --nic net-id=9af3d913-dd65-4864-88b5-cd42bce3f672 --block-device
  source=blank,dest=volume,device=vda,size=2 test12

  I got an instance, but checking the domain XML
  I only got 1 block device (2G); no local disk was created.

  Expected result:
  1. raise BadRequest if device=vda is specified in --block-device,
  or
  2. just ignore device=vda in --block-device and create 2 devices for this
  instance.

- I am prefer option 1.
+ I prefer option 1.

** Changed in: nova
 Assignee: (unassigned) => Eli Qiao (taget-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449858

Title:
  nova doesn't create the local disk if device=vda is specified in
  block_device_mapping_v2

Status in OpenStack Compute (Nova):
  New

Bug description:
  I want to create an instance with a local disk (1G) and attach a
  blank (newly created) volume (2G).

  I used the following nova command line, but specified the bdm's
  device=vda:

  nova --debug boot --image baaece45-58a2-4e4c-b8b7-7b3f12fe4bc6
  --flavor 1 --nic net-id=9af3d913-dd65-4864-88b5-cd42bce3f672 --block-
  device source=blank,dest=volume,device=vda,size=2 test12

  I got an instance, but checking the domain XML
  I only got 1 block device (2G); no local disk was created.

  Expected result:
  1. raise BadRequest if device=vda is specified in --block-device,
  or
  2. just ignore device=vda in --block-device and create 2 devices for
  this instance.

  I prefer option 1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1449858/+subscriptions



[Yahoo-eng-team] [Bug 1420192] Re: Nova interface-attach command has optional arguments to add network details. They should be positional arguments, otherwise the command fails.

2015-04-28 Thread Park
I tried to reproduce the issue with nova-network.
From the API layer, this issue should be fixed via this bug:
https://bugs.launchpad.net/nova/+bug/1428481

park@park-ThinkPad-T420:/opt/stack/nova$ nova interface-attach vm1
ERROR (HTTPNotImplemented): Network driver does not support this function. (HTTP 501)
park@park-ThinkPad-T420:/opt/stack/nova$ nova interface-list vm1
ERROR (HTTPNotImplemented): Network driver does not support this function. (HTTP 501)

So I will close this bug.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420192

Title:
  Nova interface-attach command has optional arguments to add network
  details. They should be positional arguments, otherwise the command fails.

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Nova:
  Confirmed

Bug description:
  Executing the nova interface-attach command without any optional
  arguments fails.

  root@ubuntu:~# nova interface-attach vm1
  ERROR (ClientException): Failed to attach interface (HTTP 500) (Request-ID: req-ebca9af6-8d2f-4f68-8a80-ad002b03c2fc)
  root@ubuntu:~#

  To add a network interface, at least one of the optional arguments
  must be provided. Thus, the help message needs to be modified (a
  client-side guard is sketched after the help text below).

  root@ubuntu:~# nova help interface-attach
  usage: nova interface-attach [--port-id <port_id>] [--net-id <net_id>]
                               [--fixed-ip <fixed_ip>]
                               <server>

  Attach a network interface to a server.

  Positional arguments:
    <server>              Name or ID of server.

  Optional arguments:
    --port-id <port_id>   Port ID.
    --net-id <net_id>     Network ID
    --fixed-ip <fixed_ip> Requested fixed IP.
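
  A hypothetical client-side guard (a sketch only; the function name is
  illustrative, though novaclient's CommandError is real):

      from novaclient import exceptions

      def check_interface_attach_args(args):
          # Require at least one way to identify what to attach.
          if not (args.port_id or args.net_id or args.fixed_ip):
              raise exceptions.CommandError(
                  "Specify at least one of --port-id, --net-id or --fixed-ip.")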

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420192/+subscriptions



[Yahoo-eng-team] [Bug 1449850] [NEW] Join multiple criteria together

2015-04-28 Thread Dave Chen
Public bug reported:

SQLAlchemy supports joining multiple criteria together. This can be used
to build a query in a single statement when there are multiple filtering
criteria, instead of constructing the query one criterion at a time; I
*assume* SQLAlchemy prefers this usage, and the code looks cleaner.
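
A minimal sketch of the two styles (session, User and domain_id are
illustrative names, not keystone's actual schema):

    from sqlalchemy import and_

    # multiple criteria joined in a single filter() call
    query = session.query(User).filter(
        and_(User.domain_id == domain_id, User.enabled == True))

    # versus chaining one criterion at a time
    query = session.query(User).filter(User.domain_id == domain_id)
    query = query.filter(User.enabled == True)

Both produce the same SQL; the first keeps all criteria in one place.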

** Affects: keystone
 Importance: Undecided
 Assignee: Dave Chen (wei-d-chen)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Dave Chen (wei-d-chen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1449850

Title:
  Join multiple criteria together

Status in OpenStack Identity (Keystone):
  New

Bug description:
  SQLAlchemy supports to join multiple criteria together, this is
  provided to build the query statements when there is multiple
  filtering criterion instead of constructing query statement one by
  one,  just *assume* SQLAlchemy prefer to use it in this way, and the
  code looks more clean.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1449850/+subscriptions



[Yahoo-eng-team] [Bug 1449819] [NEW] When "active DHCP" is unselected, the VM can still get an IP address

2015-04-28 Thread haoliang
Public bug reported:

1.1 Under the test tenant, create network net1 with subnet subnet1, network
address 192.168.1.0/24, keeping everything else default
1.2 Create router R1, attach R1's internal interface to subnet1 and set the
external network for R1
1.3 Create VM1-1 on subnet1; the security group is default and the image is centos
1.4 Go to the VM1-1 console; we can see VM1-1 get an IP address from the DHCP server
1.5 Edit subnet1, unselect "active DHCP", and run "service network restart" in
the VM1-1 console
Result: VM1-1 can still get an IP address and ping the subnet1 gateway successfully.
Capturing on the VM1-1 interface, we can't see any offer packets from the DHCP server.
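
For reference, step 1.5 can also be reproduced at the API level (the
subnet UUID and $TOKEN are placeholders):

    curl -X PUT -d '{"subnet": {"enable_dhcp": false}}' \
      -H "x-auth-token:$TOKEN" -H "content-type:application/json" \
      http://127.0.0.1:9696/v2.0/subnets/<subnet-id>

If the VM keeps its address afterwards, it is most likely reusing its
existing lease rather than getting new offers, which matches the capture
showing no offer packets.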

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "dhcp.pcap"
   https://bugs.launchpad.net/bugs/1449819/+attachment/4387187/+files/dhcp.pcap

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449819

Title:
  When "active DHCP" is unselected, the VM can still get an IP address

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1.1 Under test tenement,create network:net1,subnet:subnet1,network 
address:192.168.1.0/24 and other keep default
  1.2 Create router:R1,R1 inner interface relate to subnet1 and set outer 
network for R1
  1.3 Create VM1-1,choose subnet1,security group choose default,image is centos
  1.4 Goto VM1-1 control station,we can see VM1-1 get ip address from DHCP 
Server
  1.5 Edit subnet1,unselect"active DHCP",input "service network restart"in 
VM1-1 control station
  in result:VM1-1 still can get ip address and ping subnet1 gw successfully
  capture in VM1-1 interface,we can't see any offer packets from DHCP Server

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449819/+subscriptions



[Yahoo-eng-team] [Bug 1248017] Re: linuxbridge plugin cannot run alone in havana

2015-04-28 Thread Jiajun Liu
** Changed in: neutron
   Status: Incomplete => Fix Released

** Changed in: neutron
 Assignee: Jiajun Liu (ljjjustin) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1248017

Title:
  linuxbridge plugin cannot run alone in havana

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  I tested the newly released havana neutron. I found that the l3 and dhcp
  agents cannot work with the linuxbridge plugin.
  The traceback is as follows:
  Traceback (most recent call last):
    File "/usr/bin/neutron-dhcp-agent", line 6, in <module>
      from neutron.agent.dhcp_agent import main
    File "/usr/lib/python2.6/site-packages/neutron/agent/dhcp_agent.py", line 27, in <module>
      from neutron.agent.linux import interface
    File "/usr/lib/python2.6/site-packages/neutron/agent/linux/interface.py", line 25, in <module>
      from neutron.agent.linux import ovs_lib
    File "/usr/lib/python2.6/site-packages/neutron/agent/linux/ovs_lib.py", line 27, in <module>
      from neutron.plugins.openvswitch.common import constants
  ImportError: No module named openvswitch.common

  I have only installed the linuxbridge plugin, so there is no source code
  from the openvswitch plugin; the l3 and dhcp agents then complain that
  they cannot import constants from the neutron.plugins.openvswitch.common
  module.
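
  A sketch of the usual fix pattern for this kind of failure (hypothetical,
  not the actual neutron patch): defer the OVS import so interface.py can
  be loaded when only the linuxbridge plugin is installed.

      # neutron/agent/linux/interface.py (illustrative)
      class OVSInterfaceDriver(LinuxInterfaceDriver):
          def _ovs_bridge(self, bridge_name):
              # Imported here instead of at module level, so agents that
              # never touch OVS do not need the openvswitch plugin code.
              from neutron.agent.linux import ovs_lib
              return ovs_lib.OVSBridge(bridge_name, self.root_helper)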

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1248017/+subscriptions



[Yahoo-eng-team] [Bug 1449811] [NEW] nova lock/unlock api return code is not accurate

2015-04-28 Thread Eli Qiao
Public bug reported:

Currently, the lock server API only returns 202 (Accepted) and 404 (Not Found).
These return codes are not accurate.

The lock/unlock API in compute-api is a synchronous function, so the return
codes of nova-api should be (a controller sketch follows the list):

* 200: successfully locked/unlocked an instance
* 404: instance not found
* 409: locking/unlocking an already locked/unlocked instance
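
A sketch of how the controller could express that mapping (illustrative
only; the decorator and helper follow nova's common API style but are
assumptions here):

    from webob import exc

    @wsgi.response(200)                       # instead of 202
    def _lock(self, req, id, body):
        context = req.environ['nova.context']
        # raises HTTPNotFound (404) when the instance does not exist
        instance = common.get_instance(self.compute_api, context, id)
        if instance.locked:
            raise exc.HTTPConflict(                        # 409
                explanation='Instance is already locked')
        self.compute_api.lock(context, instance)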

** Affects: nova
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Eli Qiao (taget-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449811

Title:
  nova lock/unlock api return code is not accurate

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, the lock server API only returns 202 (Accepted) and 404 (Not Found).
  These return codes are not accurate.

  The lock/unlock API in compute-api is a synchronous function, so the
  return codes of nova-api should be:

  * 200: successfully locked/unlocked an instance
  * 404: instance not found
  * 409: locking/unlocking an already locked/unlocked instance

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1449811/+subscriptions



[Yahoo-eng-team] [Bug 1449789] [NEW] pyscss breaks CSS ordering

2015-04-28 Thread Richard Jones
Public bug reported:

The current version of pyscss used by Horizon (pyScss>=1.2.1,<1.3) has a
bug which causes the ordering of imports to not match the order of
@import statements in the scss files. This causes a major problem in
Bootstrap because the ordering of these components:

@import "navs";
@import "navbar";

is broken, with the navbar components appearing before the navs
components in the compiled CSS. This in turn breaks rendering since the
navbar has rules to override the navs rules.

Upgrading to pyscss 1.3+ fixes this issue (pyscss bug report
https://github.com/Kronuz/pyScss/issues/274)
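
The corresponding requirements change would look something like this (a
sketch; the exact lower bound chosen upstream is an assumption):

    # requirements.txt
    -pyScss>=1.2.1,<1.3
    +pyScss>=1.3.4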

** Affects: horizon
 Importance: High
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449789

Title:
  pyscss breaks CSS ordering

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The current version of pyscss used by Horizon (pyScss>=1.2.1,<1.3) has
  a bug which causes the ordering of imports to not match the order of
  @import statements in the scss files. This causes a major problem in
  Bootstrap because the ordering of these components:

  @import "navs";
  @import "navbar";

  is broken, with the navbar components appearing before the navs
  components in the compiled CSS. This in turn breaks rendering since
  the navbar has rules to override the navs rules.

  Upgrading to pyscss 1.3+ fixes this issue (pyscss bug report
  https://github.com/Kronuz/pyScss/issues/274)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449789/+subscriptions



[Yahoo-eng-team] [Bug 1444745] Re: Updates RPC version alias for kilo

2015-04-28 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444745

Title:
  Updates RPC version alias for kilo

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The RPC version aliases need to be updated for kilo. Those aliases are
  needed for rolling upgrades.
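
  For context, the alias map in question follows this pattern (values
  shown are illustrative, not the actual fix):

      # nova/compute/rpcapi.py (sketch)
      VERSION_ALIASES = {
          'icehouse': '3.23',
          'juno': '3.35',
          'kilo': '4.0',   # the missing entry this bug tracks (value assumed)
      }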

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444745/+subscriptions



[Yahoo-eng-team] [Bug 1448075] Re: Recent compute RPC API version bump missed out on security group parts of the api

2015-04-28 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1448075

Title:
  Recent compute RPC API version bump missed out on security group parts
  of the api

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Because the compute and security group client-side RPC APIs both share
  the same target, they need to be bumped together, as was done
  previously in 6ac1a84614dc6611591cb1f1ec8cce737972d069 and
  6b238a5c9fcef0e62cefbaf3483645f51554667b.

  In fact, having two different client-side RPC APIs for the same
  target is of little value and, to avoid future mistakes, they should
  really be merged into one.

  The impact of this bug is that all security group related calls will
  start to fail in an upgrade scenario.
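
  A minimal illustration of why the versions must move in lockstep,
  assuming oslo.messaging (topic and version values are placeholders):

      import oslo_messaging as messaging

      # Two client-side APIs, one server-side target: if only one of
      # them bumps its version, calls from the other start failing.
      compute_target = messaging.Target(topic='compute', version='4.0')
      secgroup_target = messaging.Target(topic='compute', version='4.0')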

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1448075/+subscriptions



[Yahoo-eng-team] [Bug 1447132] Re: nova-manage db migrate_flavor_data doesn't do instances not in instance_extra

2015-04-28 Thread Thierry Carrez
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447132

Title:
  nova-manage db migrate_flavor_data doesn't do instances not in
  instance_extra

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  nova-manage db migrate_flavor_data selects all of the instances by
  joining them to the instance_extra table and then checks which ones
  have flavor information in the metadata table or the extras table.
  However, if an instance isn't in instance_extra (for example, it
  hasn't been written to since the creation of the extras table) then it
  won't be migrated (even if it isn't deleted AND has flavor info in the
  metadata table).

  migrate_flavor_data should select all of the instances in the metadata
  table with flavor information and migrate those.
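
  The described problem in SQLAlchemy terms (model names are illustrative;
  per the paragraph above, the fix is to select via the metadata table):

      # An INNER join silently drops instances with no instance_extra row:
      q = session.query(models.Instance).join(models.InstanceExtra)

      # An outer join keeps them visible to the migration:
      q = session.query(models.Instance).outerjoin(models.InstanceExtra)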

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447132/+subscriptions



[Yahoo-eng-team] [Bug 1449742] Re: InvalidUUID exception after vifs plug failed

2015-04-28 Thread Francois Deppierraz
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449742

Title:
  InvalidUUID exception after vifs plug failed

Status in OpenStack Compute (Nova):
  New
Status in nova package in Ubuntu:
  New

Bug description:
  This is an OpenStack deployment under Ubuntu 14.04 which was upgraded
  all the way from havana. After a recent upgrade of nova-compute from
  2014.1.3-0ubuntu2 to 2014.1.4-0ubuntu2, nova-compute doesn't start
  anymore on one of the compute nodes.

  My guess is that the root cause here is missing data in network_info
  for this particular instance. However, nova-compute should not exit in
  this case.

  2015-04-28 22:25:05.021 10017 AUDIT nova.service [-] Starting compute node 
(version 2014.1.4)
  2015-04-28 22:25:05.022 10017 DEBUG nova.virt.libvirt.driver [-] Connecting 
to libvirt: qemu:///system _get_new_connection 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:672
  2015-04-28 22:25:05.058 10017 DEBUG nova.virt.libvirt.driver [-] Registering 
for lifecycle events  _get_new_connection 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:688
  2015-04-28 22:25:05.061 10017 DEBUG nova.virt.libvirt.driver [-] Registering 
for connection events:  _get_new_connection 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:700
  2015-04-28 22:25:05.079 10017 DEBUG nova.virt.libvirt.config [-] Generated
XML [host CPU XML elided; recoverable fields: arch x86_64, model SandyBridge,
vendor Intel] to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py:71
  2015-04-28 22:25:05.086 10017 DEBUG nova.virt.libvirt.driver [-] Starting 
native event thread _init_events 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:625
  2015-04-28 22:25:05.087 10017 DEBUG nova.virt.libvirt.driver [-] Starting 
green dispatch thread _init_events 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:630
  2015-04-28 22:25:05.290 10017 DEBUG nova.compute.manager [-] [instance: 
6c33e4b2-009b-49f3-8b7a-8b1dd5cce344] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
  2015-04-28 22:25:05.296 10017 DEBUG nova.compute.manager [-] [instance: 
6c33e4b2-009b-49f3-8b7a-8b1dd5cce344] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
  2015-04-28 22:25:05.302 10017 DEBUG nova.virt.libvirt.vif [-] vif_type=ovs 
instance= 
vif=VIF({'ovs_interfaceid': u'61c90c3f-24fc-4a58-8f6f-a7caf485fe50', 'network': 
Network({'bridge': u'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 
'version': 4, 'type': u'fixed', 'floating_ips': [], 'address': 
u'10.27.72.8'})], 'version': 4, 'meta': {u'dhcp_server': u'10.27.72.11'}, 
'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': 
u'10.26.10.1'}), IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': 
u'10.27.21.2'})], 'routes': [], 'cidr': u'10.27.72.0/24', 'gateway': 
IP({'meta': {}, 'version': 4, 'type': u'gateway', 'address': 
u'10.27.72.1'})})], 'meta': {u'injected': False, u'tenant_id': 
u'6966cc471a354147901586eed21e4c4e'}, 'id': 
u'1e5a7f58-a380-4636-9857-4e707e608530', 'label': u'c2c-vlan72'}), 'devname': 
u'tap61c90c3f-24', 'qbh_params': None, 'meta': {}, 'details': {}, 'address': 
u'fa:16:3e:a4:d0:75', 'acti
 ve': True, 'type': u'ovs', 'id': u'61c90c3f-24fc-4a58-8f6f-a7caf485fe50', 
'qbg_params': None}) plug 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py:592
  2015-04-28 22:25:05.306 10017 DEBUG nova.compute.manager [-] [instance: 
6c33e4b2-009b-49f3-8b7a-8b1dd5cce344] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
  2015-04-28 22:25:05.313 10017 DEBUG nova.compute.manager [-] [instance: 
6c33e4b2-009b-49f3-8b7a-8b1dd5cce344] Current state is 1, state in DB is 1. 
_init_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:966
  2015-04-28 22:25:05.314 10017 DEBUG nova.compute.manager [-] [instance: 
08991862-8385-41fa-9ac8-b59dea1e8e61] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
  2015-04-28 22:25:05.320 10017 DEBUG nova.compute.manager [-] [instance: 
08991862-8385-41fa-9ac8-b59dea1e8e61] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
  2015-04-28 22:25:05.327 10017 DEBUG nova.virt.libvirt.vif [-] vif_type=None 
instance= 
vif=VIF({'ovs_interfaceid': None, 'network': Network({'bridge': None, 
'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 
u'fixed', 'floating_ips': [], 'address': u'10.27.71.16'})], 'version': 4, 
'meta': {u'dhcp_server': u'10.27.71.23'}, 'dns': [IP({'meta': {}, 'version': 4, 
'type': u'dns', 'address': u'10.26.10.1'}), IP({'meta': {}, 'version': 4, 
'type': u'dns', 'addres

[Yahoo-eng-team] [Bug 1449742] [NEW] InvalidUUID exception after vifs plug failed

2015-04-28 Thread Francois Deppierraz
Public bug reported:

This is an OpenStack deployment under Ubuntu 14.04 which was upgraded all
the way from havana. After a recent upgrade of nova-compute from
2014.1.3-0ubuntu2 to 2014.1.4-0ubuntu2, nova-compute doesn't start
anymore on one of the compute nodes.

My guess is that the root cause here is missing data in network_info for
this particular instance. However, nova-compute should not exit in this
case (a possible guard is sketched below).
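
A hypothetical guard for the init path (a sketch only; the exact location
of the fix is an assumption):

    try:
        self.driver.plug_vifs(instance, network_info)
    except Exception:
        # One instance with broken network_info should not take down
        # the whole nova-compute service on startup.
        LOG.exception('Failed to plug VIFs for instance %s during '
                      'init_host; skipping it', instance['uuid'])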

2015-04-28 22:25:05.021 10017 AUDIT nova.service [-] Starting compute node 
(version 2014.1.4)
2015-04-28 22:25:05.022 10017 DEBUG nova.virt.libvirt.driver [-] Connecting to 
libvirt: qemu:///system _get_new_connection 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:672
2015-04-28 22:25:05.058 10017 DEBUG nova.virt.libvirt.driver [-] Registering 
for lifecycle events  _get_new_connection 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:688
2015-04-28 22:25:05.061 10017 DEBUG nova.virt.libvirt.driver [-] Registering 
for connection events:  _get_new_connection 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:700
2015-04-28 22:25:05.079 10017 DEBUG nova.virt.libvirt.config [-] Generated XML
[host CPU XML elided; recoverable fields: arch x86_64, model SandyBridge,
vendor Intel] to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py:71
2015-04-28 22:25:05.086 10017 DEBUG nova.virt.libvirt.driver [-] Starting 
native event thread _init_events 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:625
2015-04-28 22:25:05.087 10017 DEBUG nova.virt.libvirt.driver [-] Starting green 
dispatch thread _init_events 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:630
2015-04-28 22:25:05.290 10017 DEBUG nova.compute.manager [-] [instance: 
6c33e4b2-009b-49f3-8b7a-8b1dd5cce344] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
2015-04-28 22:25:05.296 10017 DEBUG nova.compute.manager [-] [instance: 
6c33e4b2-009b-49f3-8b7a-8b1dd5cce344] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
2015-04-28 22:25:05.302 10017 DEBUG nova.virt.libvirt.vif [-] vif_type=ovs 
instance= 
vif=VIF({'ovs_interfaceid': u'61c90c3f-24fc-4a58-8f6f-a7caf485fe50', 'network': 
Network({'bridge': u'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 
'version': 4, 'type': u'fixed', 'floating_ips': [], 'address': 
u'10.27.72.8'})], 'version': 4, 'meta': {u'dhcp_server': u'10.27.72.11'}, 
'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': 
u'10.26.10.1'}), IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': 
u'10.27.21.2'})], 'routes': [], 'cidr': u'10.27.72.0/24', 'gateway': 
IP({'meta': {}, 'version': 4, 'type': u'gateway', 'address': 
u'10.27.72.1'})})], 'meta': {u'injected': False, u'tenant_id': 
u'6966cc471a354147901586eed21e4c4e'}, 'id': 
u'1e5a7f58-a380-4636-9857-4e707e608530', 'label': u'c2c-vlan72'}), 'devname': 
u'tap61c90c3f-24', 'qbh_params': None, 'meta': {}, 'details': {}, 'address': 
u'fa:16:3e:a4:d0:75', 'active
 ': True, 'type': u'ovs', 'id': u'61c90c3f-24fc-4a58-8f6f-a7caf485fe50', 
'qbg_params': None}) plug 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py:592
2015-04-28 22:25:05.306 10017 DEBUG nova.compute.manager [-] [instance: 
6c33e4b2-009b-49f3-8b7a-8b1dd5cce344] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
2015-04-28 22:25:05.313 10017 DEBUG nova.compute.manager [-] [instance: 
6c33e4b2-009b-49f3-8b7a-8b1dd5cce344] Current state is 1, state in DB is 1. 
_init_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:966
2015-04-28 22:25:05.314 10017 DEBUG nova.compute.manager [-] [instance: 
08991862-8385-41fa-9ac8-b59dea1e8e61] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
2015-04-28 22:25:05.320 10017 DEBUG nova.compute.manager [-] [instance: 
08991862-8385-41fa-9ac8-b59dea1e8e61] Checking state _get_power_state 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1088
2015-04-28 22:25:05.327 10017 DEBUG nova.virt.libvirt.vif [-] vif_type=None 
instance= 
vif=VIF({'ovs_interfaceid': None, 'network': Network({'bridge': None, 
'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 
u'fixed', 'floating_ips': [], 'address': u'10.27.71.16'})], 'version': 4, 
'meta': {u'dhcp_server': u'10.27.71.23'}, 'dns': [IP({'meta': {}, 'version': 4, 
'type': u'dns', 'address': u'10.26.10.1'}), IP({'meta': {}, 'version': 4, 
'type': u'dns', 'address': u'10.27.21.2'})], 'routes': [], 'cidr': 
u'10.27.71.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': u'gateway', 
'address': u'10.27.71.1'})})], 'meta': {u'injected': False, u'tenant_id': 
u'445ed83efc894d11963d10be98d6c2ab'}, 'id': 
u'05bdb9ac-4cbc-467d-b355-269c4b6f9733', 'label': u'c2c-vlan71'}), 'devname': 
u'tape3e7ac16-33', 'qbh_params': None, 'meta': {}, 'details': {}, 'address': 
u'fa:16:3e:d8:b7:3d', 'active': T

[Yahoo-eng-team] [Bug 1441107] Re: Missing entry point for haproxy namespace driver

2015-04-28 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441107

Title:
  Missing entry point for haproxy namespace driver

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  Using the haproxy NSDriver for the lbaas agent from the neutron repo
  causes the agent to fail during startup. The reason is that the
  neutron-to-neutron_lbaas package path translation is missing from the
  entry points.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441107/+subscriptions



[Yahoo-eng-team] [Bug 1447883] Re: Restrict netmask of CIDR to avoid DHCP resync is not enough

2015-04-28 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447883

Title:
  Restrict netmask of CIDR to avoid DHCP resync is not enough

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  "Restrict netmask of CIDR to avoid DHCP resync" is not enough:
  https://bugs.launchpad.net/neutron/+bug/1443798

  I'd like to prevent the following cases:

  [Condition]
- Plugin: ML2
- subnet with "enable_dhcp" set to True

  [Operations]
  A. Specify "[]" (an empty list) for "allocation_pools" when creating/updating a subnet
  ---
  $ curl -X POST -d '{"subnet": {"name": "test_subnet", "cidr":
  "192.168.200.0/24", "ip_version": 4, "network_id":
  "649c5531-338e-42b5-a2d1-4d49140deb02", "allocation_pools": []}}' -H
  "x-auth-token:$TOKEN" -H "content-type:application/json"
  http://127.0.0.1:9696/v2.0/subnets

  Then, when the dhcp-agent creates its own DHCP port, the resync bug is
  reproduced.

  B. Create ports and exhaust the allocation_pools
  ---
  1. Create a subnet with 192.168.1.0/24. The DHCP port has already been created.
     gateway_ip: 192.168.1.1
     DHCP-port: 192.168.1.2
     allocation_pools: {"start": 192.168.1.2, "end": 192.168.1.254}
     The number of available ip_addresses is 252.

  2. Create non-dhcp ports and exhaust the ip_addresses in the allocation_pools.
     In this case, the user creates a port 252 times.
     The number of available ip_addresses is 0.

  3. The user deletes the DHCP port (192.168.1.2).
     The number of available ip_addresses is 1.

  4. The user creates a non-dhcp port.
     The number of available ip_addresses is 0.
     Then the dhcp-agent tries to create the DHCP port, which reproduces the
     resync bug (a validation sketch follows).
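
  A hypothetical API-side guard for case A (a sketch; the function and
  message are illustrative, not the actual neutron change):

      def validate_dhcp_subnet(subnet):
          # A DHCP-enabled subnet needs at least one allocatable address
          # for the DHCP port itself, or the agent will resync forever.
          if subnet.get('enable_dhcp') and not subnet.get('allocation_pools'):
              raise ValueError('enable_dhcp requires non-empty '
                               'allocation_pools')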

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447883/+subscriptions



[Yahoo-eng-team] [Bug 1448813] Re: radvd running as neutron user in Kilo, attached network dead

2015-04-28 Thread Thierry Carrez
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448813

Title:
  radvd running as neutron user in Kilo, attached network dead

Status in App Catalog:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  Kilo RC1 release, mirantis Debian Jessie build

  Linux Kernel 3.19.3, ML2 vlan networking

  radvd version 1:1.9.1-1.3

  Network with IPv6 ULA SLAAC, IPv6 GUA SLAAC, Ipv4 RFC1918 configured.

  Radvd does not start, neutron-l3-agent does not set up OVS vlan
  forwarding between network and compute node, IPv4 completely
  disconnected as well. Looks like complete L2 breakage.

  Need to get this one fixed before release of Kilo.

  Work around:

  chown root:neutron /usr/sbin/radvd
  chmod 2750 /usr/sbin/radvd

  radvd gives a message in the neutron-l3-agent log about not being able
  to create an IPv6 ICMP port, just like when it is run as a non-root
  user.

  Notice that radvd is no longer executed via rootwrap/sudo, unlike all
  the other ip route/ip address/ip netns information-gathering commands.
  Was privileged execution missed in the Neutron code refactor?
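
  The rootwrap mechanism referred to works via filter files like the one
  below (an illustration of the kind of entry that would be expected; the
  exact file and line in the fix are assumptions):

      # /etc/neutron/rootwrap.d/l3.filters
      [Filters]
      radvd: CommandFilter, radvd, root

  With such a filter in place, the l3-agent can spawn radvd as root
  through neutron-rootwrap instead of as the neutron user.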

To manage notifications about this bug go to:
https://bugs.launchpad.net/app-catalog/+bug/1448813/+subscriptions



[Yahoo-eng-team] [Bug 1447039] Re: "No flavor with a name or ID of '' exists" reported when call "nova flavor-show {id}"

2015-04-28 Thread melanie witt
*** This bug is a duplicate of bug 1446850 ***
https://bugs.launchpad.net/bugs/1446850

Hi Jerry, this is a bug in novaclient. A fix has been committed, but not
yet released. You can work around it by using novaclient version 2.22.0
until the next release containing the fix.
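
For example, assuming a pip-managed client:

    pip install python-novaclient==2.22.0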

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447039

Title:
  "No flavor with a name or ID of '' exists" reported when call "nova
  flavor-show {id}"

Status in Python client library for Nova:
  New

Bug description:
  Create a flavor with the following method:
  flavors.create(name, memory, vcpus, root_gb,
                 ephemeral_gb=ephemeral_gb,
                 flavorid=flavorid, swap=swap,
                 rxtx_factor=rxtx_factor,
                 is_public=is_public)

  Creation succeeded:
  # nova flavor-list
  +------------------------------------------+-------------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID                                       | Name        | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +------------------------------------------+-------------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1                                        | m1.tiny     | 512       | 1    | 0         |      | 1     | 1.0         | True      |
  | 2                                        | m1.small    | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
  | 3                                        | m1.medium   | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
  | 4                                        | m1.large    | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
  | 5                                        | m1.xlarge   | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
  | TT-19e3e819-e68a-4ddf-ac2b-2f8636356603  | TT-Testnew2 | 1024      | 20   | 0         |      | 1     | 1.0         | True      |
  +------------------------------------------+-------------+-----------+------+-----------+------+-------+-------------+-----------+

  But showing it fails:
  # nova flavor-show TT-19e3e819-e68a-4ddf-ac2b-2f8636356603
  ERROR (CommandError): No flavor with a name or ID of 'pvc-19e3e819-e68a-4ddf-ac2b-2f8636356603' exists.

  This error also happened when booting a VM; it cannot find the flavor and the boot fails:
  # nova boot --flavor TT-Testnew --image RHEL6.5 --nic net-id=3ccfc448-ee59-4b36-94c1-57b76fa84b26,v4-fixed-ip=192.168.2.201 sles-april21-jerry2
  ERROR (CommandError): No flavor with a name or ID of 'tt-e835a885-e61c-48d7-af23-d4e5dbf04710' exists.

  I think this is a critical issue, please help, thanks.

  
  1. version:
  [root@controller ~]# rpm -aq | grep nova
  openstack-nova-conductor-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-network-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-cells-2015.1-201504161438.ibm.el7.111.noarch
  python-novaclient-2.23.0-1.ibm.el7.noarch
  openstack-nova-scheduler-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-novncproxy-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-objectstore-2015.1-201504161438.ibm.el7.111.noarch
  python-nova-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-api-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-console-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-compute-prereqs-2013.1-201503192011.ibm.2.ppc64
  openstack-nova-compute-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-common-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-cert-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-2015.1-201504161438.ibm.el7.111.noarch

  2. logs:
  Only this appears in nova-api.log:
  2015-04-22 04:24:03.051 28320 INFO nova.api.openstack.wsgi [req-09ce810d-fb83-4102-b137-42ea9d67c4dc 1cabd0c96f2c48599ca4220b9b9d3f8f 2a0dc745a17f4b3b9c3b99b3df95084e - - -] HTTP exception thrown: The resource could not be found.

  3. Steps to reproduce: create a flavor with the code above, then run the commands above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1447039/+subscriptions



[Yahoo-eng-team] [Bug 1449712] [NEW] Cannot override template from InstanceMetadata

2015-04-28 Thread Mathieu Gagné
Public bug reported:

There is no way to override the template used by
netutils.get_injected_network_template within InstanceMetadata.

This could be useful for people implementing drivers which could require
a different network template based on flavor or individual instance
properties.
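
A hypothetical shape for such an override (a sketch only; today's
InstanceMetadata takes no such argument, which is the point of this
report, and the keyword names are assumptions):

    from nova.virt import netutils

    class InstanceMetadata(object):
        def __init__(self, instance, network_info=None,
                     network_template=None):
            self.instance = instance
            self.network_info = network_info
            # caller-supplied template wins; None falls back to the
            # configured default inside get_injected_network_template
            self.network_template = network_template

        def network_config(self):
            return netutils.get_injected_network_template(
                self.network_info, template=self.network_template)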

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449712

Title:
  Cannot override template from InstanceMetadata

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is no way to override the template used by
  netutils.get_injected_network_template within InstanceMetadata.

  This could be useful for people implementing drivers which could
  require a different network template based on flavor or individual
  instance properties.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1449712/+subscriptions



[Yahoo-eng-team] [Bug 1444745] Re: Updates RPC version alias for kilo

2015-04-28 Thread Thierry Carrez
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => kilo-rc3

** Changed in: nova/kilo
   Status: New => In Progress

** Changed in: nova/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444745

Title:
  Updates RPC version alias for kilo

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  The RPC version aliases need to be updated for kilo. Those aliases are
  needed for rolling upgrades.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444745/+subscriptions



[Yahoo-eng-team] [Bug 1447132] Re: nova-manage db migrate_flavor_data doesn't do instances not in instance_extra

2015-04-28 Thread Thierry Carrez
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => kilo-rc3

** Changed in: nova/kilo
   Status: New => In Progress

** Changed in: nova/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447132

Title:
  nova-manage db migrate_flavor_data doesn't do instances not in
  instance_extra

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  nova-manage db migrate_flavor_data selects all of the instances by
  joining them to the instance_extra table and then checks which ones
  have flavor information in the metadata table or the extras table.
  However, if an instance isn't in instance_extra (for example, it
  hasn't been written to since the creation of the extras table) then it
  won't be migrated (even if it isn't deleted AND has flavor info in the
  metadata table).

  migrate_flavor_data should select all of the instances in the metadata
  table with flavor information and migrate those.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447132/+subscriptions



[Yahoo-eng-team] [Bug 1448075] Re: Recent compute RPC API version bump missed out on security group parts of the api

2015-04-28 Thread Thierry Carrez
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => kilo-rc3

** Changed in: nova/kilo
   Importance: Undecided => Critical

** Changed in: nova/kilo
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1448075

Title:
  Recent compute RPC API version bump missed out on security group parts
  of the api

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  Because the compute and security group client-side RPC APIs both share
  the same target, they need to be bumped together, as was done
  previously in 6ac1a84614dc6611591cb1f1ec8cce737972d069 and
  6b238a5c9fcef0e62cefbaf3483645f51554667b.

  In fact, having two different client-side RPC APIs for the same
  target is of little value and, to avoid future mistakes, they should
  really be merged into one.

  The impact of this bug is that all security group related calls will
  start to fail in an upgrade scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1448075/+subscriptions



[Yahoo-eng-team] [Bug 1449526] Re: When the vlan id is outside the configured range, network creation still succeeds

2015-04-28 Thread Kevin Benton
I agree with James.  I don't think these were intended to apply to admin
operations.

If we do want to restrict admins from using VLANs outside of the ranges,
I think that should be configurable with a new option called something
like 'limit_admin_to_vlan_ranges'.
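
Such an option would look something like this (purely illustrative; no
option with this name exists in neutron today):

    from oslo_config import cfg

    vlan_opts = [
        cfg.BoolOpt('limit_admin_to_vlan_ranges',
                    default=False,
                    help='If True, even admins may only create VLAN '
                         'networks whose segmentation id falls inside '
                         'network_vlan_ranges.'),
    ]
    cfg.CONF.register_opts(vlan_opts, group='ml2_type_vlan')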

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449526

Title:
  When the vlan id is outside the configured range, network creation
  still succeeds

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  In the neutron ml2 config file /etc/neutron/plugins/ml2/ml2_conf.ini,
  the vlan range 1002 to 1030 is specified:

  [ml2_type_vlan]
  network_vlan_ranges = physnet2:1002:1030

  When I specify vlan id 1070, which is outside the range in the ml2
  config file, the network is still created successfully.

  In the bug fix, I will check the vlan segment id against the config
  file's vlan range.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449526/+subscriptions



[Yahoo-eng-team] [Bug 1243216] Re: iptables manipulation lacks functional testing

2015-04-28 Thread Miguel Angel Ajo
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1243216

Title:
  iptables manipulation lacks functional testing

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The iptables manipulation in neutron/agent/linux/iptables_manager.py
  has unit tests but no tests of actual system interaction.  This bug is
  intended to umbrella patches that will add functional testing of such
  interaction over a representative portion of the module in question.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1243216/+subscriptions



[Yahoo-eng-team] [Bug 1174404] Re: Unit tests crash if the system has no IPv6 support

2015-04-28 Thread pritesh
** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: pritesh (pritesh) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1174404

Title:
  Unit tests crash if the system has no IPv6 support

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I'm trying to run unittests and got errors trying to execute tests:

  ==
  ERROR: 
nova.tests.test_service.TestWSGIService.test_service_random_port_with_ipv6
  --
  _StringException: Empty attachments:
pythonlogging:'nova'

  Traceback (most recent call last):
File "/root/openstack/nova/app/nova/tests/test_service.py", line 189, in 
test_service_random_port_with_ipv6
  test_service = service.WSGIService("test_service")
File "/root/openstack/nova/app/nova/service.py", line 608, in __init__
  max_url_len=max_url_len)
File "/root/openstack/nova/app/nova/wsgi.py", line 120, in __init__
  self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 35, 
in listen
  sock = socket.socket(family, socket.SOCK_STREAM)
File "/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 116, in 
__init__
  fd = _original_socket(family_or_realsock, *args, **kwargs)
File "/usr/lib64/python2.6/socket.py", line 184, in __init__
  _sock = _realsocket(family, type, proto)
  error: [Errno 97] Address family not supported by protocol

  
  ==
  ERROR: nova.tests.test_wsgi.TestWSGIServer.test_start_random_port_with_ipv6
  --
  _StringException: Empty attachments:
pythonlogging:'nova'

  Traceback (most recent call last):
File "/root/openstack/nova/app/nova/tests/test_wsgi.py", line 106, in 
test_start_random_port_with_ipv6
  host="::1", port=0)
File "/root/openstack/nova/app/nova/wsgi.py", line 120, in __init__
  self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 35, 
in listen
  sock = socket.socket(family, socket.SOCK_STREAM)
File "/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 116, in 
__init__
  fd = _original_socket(family_or_realsock, *args, **kwargs)
File "/usr/lib64/python2.6/socket.py", line 184, in __init__
  _sock = _realsocket(family, type, proto)
  error: [Errno 97] Address family not supported by protocol

  
  ==
  ERROR: nova.tests.test_wsgi.TestWSGIServerWithSSL.test_app_using_ipv6_and_ssl
  --
  _StringException: Empty attachments:
pythonlogging:'nova'

  Traceback (most recent call last):
File "/root/openstack/nova/app/nova/tests/test_wsgi.py", line 212, in 
test_app_using_ipv6_and_ssl
  use_ssl=True)
File "/root/openstack/nova/app/nova/wsgi.py", line 120, in __init__
  self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 35, 
in listen
  sock = socket.socket(family, socket.SOCK_STREAM)
File "/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 116, in 
__init__
  fd = _original_socket(family_or_realsock, *args, **kwargs)
File "/usr/lib64/python2.6/socket.py", line 184, in __init__
  _sock = _realsocket(family, type, proto)
  error: [Errno 97] Address family not supported by protocol
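
  A minimal way to guard such tests (a sketch, assuming testtools-style
  skips; the class shown is a stand-in for the failing test cases):

      import socket
      import testtools

      class TestWSGIServer(testtools.TestCase):
          @testtools.skipIf(not socket.has_ipv6,
                            "System has no IPv6 support")
          def test_start_random_port_with_ipv6(self):
              pass  # body elided; the skip decorator is the point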

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1174404/+subscriptions



[Yahoo-eng-team] [Bug 1449651] [NEW] Magic Search missing documentation / example

2015-04-28 Thread Aaron Sahlin
Public bug reported:

The Magic Search widget is missing documentation on why the client vs.
server facet distinction would be helpful to users.
And even more importantly: how / where is the 'isServer' property set?

** Affects: horizon
 Importance: Undecided
 Assignee: Aaron Sahlin (asahlin)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Aaron Sahlin (asahlin)

** Summary changed:

- Need example for indicating facet is server side
+ Magic Search missing documentation / example

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449651

Title:
  Magic Search missing documentation / example

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Magic Search widget is missing documentation on why the client vs.
  server facet distinction would be helpful to users.
  And even more importantly: how / where is the 'isServer' property set?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449651/+subscriptions



[Yahoo-eng-team] [Bug 1449648] [NEW] Keystone API JS Service uses wrong method

2015-04-28 Thread Matt Borland
Public bug reported:

The KeystoneAPI service (in JS) has a grantRole function that uses the
wrong method (delete instead of put) to grant the role.

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Matt Borland (palecrow)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449648

Title:
  Keystone API JS Service uses wrong method

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The KeystoneAPI service (in JS) has a grantRole function that uses the
  wrong method (delete instead of put) to grant the role.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449648/+subscriptions



[Yahoo-eng-team] [Bug 1449639] Re: RBD: On image creation error, image is not deleted

2015-04-28 Thread Gorka Eguileor
** Description changed:

  When an exception is raised while adding/creating an image and the image
  has already been created, the new image is not properly deleted.

  The fault lies in the `_delete_image` call of the Store.add method,
  which is passing incorrect arguments.
+
+ This also affects Glance (Icehouse), since back then the glance_store
+ functionality was included there.

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Gorka Eguileor (gorka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1449639

Title:
  RBD: On image creation error, image is not deleted

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Glance backend store-drivers library (glance_store):
  New

Bug description:
  When an exception is raised while adding/creating an image and the
  image has already been created, the new image is not properly deleted.

  The fault lies in the `_delete_image` call of the Store.add method,
  which is passing incorrect arguments.

  This also affects Glance (Icehouse), since back then the glance_store
  functionality was included there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1449639/+subscriptions



[Yahoo-eng-team] [Bug 1449615] Re: Something went wrong! Floating ip disassociate in Juno

2015-04-28 Thread smarta94
Realized this was filed against the wrong project when initially reported.

** Project changed: openstack-cisco => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449615

Title:
  Something went wrong! Floating ip disassociate in Juno

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Detail Description:

  While trying to disassociate a floating ip (as both an admin and a normal
  user) using the dashboard interface, an error message screen appears stating:
  "Something went wrong! An unexpected error has occurred. Try refreshing the
  page. If that doesn't help, contact your local administrator."

  This is using the Juno release of OpenStack. Searching for this issue, I
  found the bug from a couple of releases ago, but according to the reports
  the issue was supposedly fixed, and I haven't found a Juno-related bug
  report on this yet. Using both the nova and neutron command
  line options allows the disassociation of the floating ips with no
  issues, but users should be able to do the same in the GUI!

  Steps to reproduce the issue:

  Create a new instance and floating ip. Associate the floating ip address to
  the instance. Attempt to disassociate the ip -- click to confirm
  disassociating the floating ip. Instantly the message screen
    "Something went wrong! An unexpected error has occurred. Try refreshing
    the page. If that doesn't help, contact your local administrator."
  appears. You can either click home, help, or back. Clicking the back button
  brings up a waiting screen and does no good; home brings you to the dashboard
  and the ip is still associated.

  Other information:

  Openstack Juno release
  python-django-horizon 2014.2.1-2, version 1.6.5
  openstack-dashboard 2014.2.1-2

  Logs:

  Logs do not seem to show any errors when this happens; I will add them as
  replies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449615/+subscriptions



[Yahoo-eng-team] [Bug 1449615] [NEW] Something went wrong! Floating ip disassociate in Juno

2015-04-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Detail Description :

While trying to disassociate a floating ip (as both an admin and normal user) 
using the dashboard interface, an error message screen appears stating:
"Something went wrong! An unexpected error has occurred. Try refreshing the 
page. If that doesn't help, contact your local administrator."

This is using the Juno release of OpenStack. Searching for this issue, I
found a bug report from a couple of releases ago, but according to the
reports the issue was supposedly fixed, and I haven't found a Juno-related
bug report on this yet. Using both the nova and neutron command-line
clients allows disassociating the floating ips with no issues, but users
should be able to do the same in the GUI!

Steps to reproduce issue:


Create a new instance and floating ip. Associate the floating ip address
to the instance. Attempt to disassociate the ip -- click to confirm
disassociating the floating ip. Instantly the message screen
"Something went wrong! An unexpected error has occurred. Try refreshing
the page. If that doesn't help, contact your local administrator."
appears. You can either click home, help, or back. Clicking the back
button brings up a waiting screen and does no good; home brings you to the
dashboard and the ip is still associated.

Other information:

Openstack Juno release
python-django-horizon  2014.2.1-2, version 1.6.5
openstack-dashboard 2014.2.1-2

Logs:
=
Logs do not seem to show any errors when this happens; I will add them as
replies.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Something went wrong! Floating ip disassociate in Juno
https://bugs.launchpad.net/bugs/1449615
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447989] Re: nova volume-detach volume failed due to arguments counts mismatch?

2015-04-28 Thread Matt Riedemann
How about not opening bugs against community nova for your busted out of
tree driver, thanks.

http://git.openstack.org/cgit/stackforge/powervc-driver/tree/cinder-
powervc/powervc/volume/driver/powervc.py#n312

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447989

Title:
  nova volume-detach volume failed due to arguments counts mismatch?

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  nova attach succeeded, but on detach, after debugging, the nova manager
  reports this error:

  > 
/usr/lib/python2.7/site-packages/nova/compute/manager.py(4927)detach_volume()
  -> self.volume_api.detach(context.elevated(), volume_id)

  Start
  Invalid input received: The server could not comply with the request since it 
is either malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-9a6619bc-01bd-4226-9904-45e1f571a5bb)
  End

  Checking /var/log/cinder/api.log, it reports:

  Start
  2015-04-24 04:18:06.461 27284 ERROR cinder.api.openstack.wsgi 
[req-9a6619bc-01bd-4226-9904-45e1f571a5bb 486b55b3d6254105b08f1a449777507d 
861703cfc7cf4f67a6a0049618b0865f - - -] Exception handling resource: 
detach_volume() takes exactly 3 arguments (4 given)
  Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 142, in _dispatch_and_reply
  executor_callback))

File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 186, in _dispatch
  executor_callback)

File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 130, in _do_dispatch
  result = func(ctxt, **new_args)

File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, 
in wrapper
  return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 157, 
in ldo_inner1
  return ldo_inner2(inst, context, volume_id, attachment_id, **kwargs)

File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 
445, in inner
  return f(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 156, 
in ldo_inner2
  return f(*_args, **_kwargs)

File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 897, 
in detach_volume
  {'attach_status': 'error_detaching'})

File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)

File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 892, 
in detach_volume
  self.driver.detach_volume(context, volume, attachment)

File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, 
in wrapper
  return f(*args, **kwargs)

  TypeError: detach_volume() takes exactly 3 arguments (4 given)
  End
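
  For illustration, here is a minimal sketch of the signature mismatch
  behind this TypeError (hypothetical driver classes; the assumption is
  that the Kilo volume manager passes an extra attachment argument that
  the out-of-tree driver's older 3-argument signature cannot accept):

      class OldDriver(object):
          def detach_volume(self, context, volume):  # 3 args incl. self
              pass

      class UpdatedDriver(object):
          def detach_volume(self, context, volume, attachment=None):
              pass

      UpdatedDriver().detach_volume('ctxt', 'vol', None)  # fine
      OldDriver().detach_volume('ctxt', 'vol', None)
      # TypeError: detach_volume() takes exactly 3 arguments (4 given)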

  The result is that my volume is stuck in "detaching" status, although
  it has actually already been detached.
  [root@controller ~]# cinder list
  | b1baa918-5a4f-40fc-82ab-63a9aade0efc | detaching |  test-for-attach-1   
|  1   | pvc:22ba67376ed3d811e4ba52c9d2fb5c:dev4clstr base template |  
false   |  

  I believe this is a bug that needs to be taken care of.

  
  
  1. version:
  [root@controller ~]# rpm -aq | grep nova
  openstack-nova-conductor-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-network-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-cells-2015.1-201504161438.ibm.el7.111.noarch
  python-novaclient-2.23.0-1.ibm.el7.noarch
  openstack-nova-scheduler-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-novncproxy-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-objectstore-2015.1-201504161438.ibm.el7.111.noarch
  python-nova-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-api-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-console-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-compute-prereqs-2013.1-201503192011.ibm.2.ppc64
  openstack-nova-compute-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-common-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-cert-2015.1-201504161438.ibm.el7.111.noarch
  openstack-nova-2015.1-201504161438.ibm.el7.111.noarch

  2. logs:
  see above

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449611] [NEW] Attempting to update an image property or add an image tag with 4 byte unicode returns a 500, not a 400

2015-04-28 Thread Luke Wollney
Public bug reported:

Overview:
When attempting to reproduce the following Launchpad bug
(https://bugs.launchpad.net/glance/+bug/1370954), 4-byte unicode
characters should no longer be allowed for image properties or image
tags. Using the specific value of u'Černý', a 500 response still appears
to be returned instead of the 400 response we should see. This could be a
case that the validation regex didn't catch.

Steps to reproduce:
1) Import an image
2) Attempt to update the image name, passing in 4 byte unicode via:
curl -i /v2/images/ -X PATCH -H "Content-Type: 
application/openstack-images-v2.1-json-patch" -H "Accept: application/json" -H 
"X-Auth-Token: " -d '[{"path": "/name", "value": "u'Černý'", "op": 
"replace"}]'
3) Notice that a 500 response is returned
4) Attempt to add an image tag, passing in 4 byte unicode via:
curl -i /v2/images//tags/u'Černý' -X PUT -H "Content-Type: 
application/json" -H "Accept: application/json" -H "X-Auth-Token: "
5) Notice that a 500 response is returned

Expected:
A 400 response should be returned

Actual:
A 500 response is returned for both of these operations
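
For reference, a hedged sketch of the kind of codepoint check this
validation implies (a hypothetical helper, not Glance's actual code;
assumes Python 3 or a wide Python 2 build). Note that characters such as
'Č' occupy only 2 bytes in UTF-8, so they would pass a strict 4-byte
check:

    import re

    # Matches any codepoint outside the Basic Multilingual Plane, i.e.
    # anything that needs 4 bytes in UTF-8.
    FOUR_BYTE_RE = re.compile(u'[\U00010000-\U0010FFFF]')

    def contains_4byte_utf8(value):
        return bool(FOUR_BYTE_RE.search(value))

    print(contains_4byte_utf8(u'\U0001F600'))  # True: outside the BMP
    print(contains_4byte_utf8(u'Černý'))       # False: 2-byte characters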

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1449611

Title:
  Attempting to update an image property or add an image tag with 4 byte
  unicode returns a 500, not a 400

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Overview:
  When attempting to reproduce the following Launchpad bug
  (https://bugs.launchpad.net/glance/+bug/1370954), 4-byte unicode
  characters should no longer be allowed for image properties or image
  tags. Using the specific value of u'Černý', a 500 response still
  appears to be returned instead of the 400 response we should see. This
  could be a case that the validation regex didn't catch.

  Steps to reproduce:
  1) Import an image
  2) Attempt to update the image name, passing in 4 byte unicode via:
  curl -i /v2/images/ -X PATCH -H "Content-Type: 
application/openstack-images-v2.1-json-patch" -H "Accept: application/json" -H 
"X-Auth-Token: " -d '[{"path": "/name", "value": "u'Černý'", "op": 
"replace"}]'
  3) Notice that a 500 response is returned
  4) Attempt to add an image tag, passing in 4 byte unicode via:
  curl -i /v2/images//tags/u'Černý' -X PUT -H "Content-Type: 
application/json" -H "Accept: application/json" -H "X-Auth-Token: "
  5) Notice that a 500 response is returned

  Expected:
  A 400 response should be returned

  Actual:
  A 500 response is returned for both of these operations

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1449611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188450] Re: Update floating ip should consider multi_host or network_host config when live migration

2015-04-28 Thread Timofey Durakov
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188450

Title:
  Update floating ip should consider multi_host or network_host config
  when live migration

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When migrating an instance, the method _post_live_migration updates
  the host value of the floating IP associated with the instance to the
  destination host. See
  https://github.com/openstack/nova/blob/stable/folsom/nova/compute/manager.py#L2313
  . This operation does not consider whether multi_host is false or
  whether there is a nova-network running on the destination host when
  using nova-network as the network service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441576] Re: LoadBalancer not opening in horizon after network is deleted

2015-04-28 Thread Sudipta Biswas
I can confirm the bug - it is easily reproduced with the steps mentioned.
However, the problem seems to be in the Horizon code rather than in
Neutron. Also, none of the logs you have pasted seem relevant.

In the dashboard/api/lbaas.py file:

def _vip_get(request, vip_id, expand_resource=False):
    vip = neutronclient(request).show_vip(vip_id).get('vip')
    if expand_resource:
        vip['subnet'] = neutron.subnet_get(request, vip['subnet_id'])

The subnet get is what fails in this case, but there seems to be no error
handler for it.
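
A minimal sketch of the missing guard (a hypothetical fix; it reuses the
names from the snippet above, which are assumed to come from horizon's
openstack_dashboard/api/lbaas.py):

    def _vip_get(request, vip_id, expand_resource=False):
        vip = neutronclient(request).show_vip(vip_id).get('vip')
        if expand_resource:
            try:
                vip['subnet'] = neutron.subnet_get(request,
                                                   vip['subnet_id'])
            except Exception:
                # The subnet may have been deleted out from under the
                # VIP; degrade gracefully instead of raising a 500.
                vip['subnet'] = None
        return vip
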
The error is correctly reported in the neutron server log as well:

show failed (client error): Subnet e898cfa9-87ab-481c-8ecd-79c0f1af2894
could not be found

As far as the lbaas warning is concerned, that looks legitimate:

def get_stats(self, pool_id):
    socket_path = self._get_state_file_path(pool_id, 'sock', False)
    TYPE_BACKEND_REQUEST = 2
    TYPE_SERVER_REQUEST = 4
    if os.path.exists(socket_path):
        parsed_stats = self._get_stats_from_socket(
            socket_path,
            entity_type=TYPE_BACKEND_REQUEST | TYPE_SERVER_REQUEST)
        pool_stats = self._get_backend_stats(parsed_stats)
        pool_stats['members'] = self._get_servers_stats(parsed_stats)
        return pool_stats
    else:
        LOG.warn(_('Stats socket not found for pool %s'), pool_id)
        return {}


** Changed in: neutron
   Status: Incomplete => Invalid

** Tags added: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441576

Title:
  LoadBalancer not opening in horizon after network is deleted

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Load Balancer fails to open in Horizon, after deleting the Assigned
  "Network"

  Steps:
  1) Create a network and subnetwork
  2) Create a pool in Load Balancer and assign a subnet
  3) Delete the network assigned to the pool
  4) The load balancer fails to open from Horizon.

  But the load balancer opens after deleting the loadbalancer-pool from
  the CLI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277104] Re: wrong order of assertEquals args

2015-04-28 Thread Sushil Kumar
** Also affects: python-troveclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1277104

Title:
  wrong order of assertEquals args

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Oslo Policy:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Glance:
  Fix Released
Status in Python client library for Ironic:
  Fix Released
Status in Python client library for Nova:
  Fix Released
Status in OpenStack Command Line Client:
  Fix Released
Status in Python client library for Sahara (ex. Savanna):
  Fix Released
Status in Python client library for Swift:
  In Progress
Status in Trove client binding:
  In Progress
Status in Rally:
  Confirmed
Status in Openstack Database (Trove):
  In Progress

Bug description:
  The args of the assertEquals method in ceilometer.tests are arranged
  in the wrong order. As a result, when a test fails it shows incorrect
  information about the expected and actual data. This occurs more than
  2000 times. The right order of arguments is "expected, actual".
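
  A quick self-contained illustration of the convention (example code,
  not from ceilometer):

      import unittest

      class ExampleTest(unittest.TestCase):
          def test_order(self):
              observed = 6 * 7
              # Wrong: on failure this reports the observed value as
              # "expected" and the literal 42 as "actual":
              #     self.assertEqual(observed, 42)
              # Right: the convention is (expected, actual).
              self.assertEqual(42, observed)

      if __name__ == '__main__':
          unittest.main()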

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1277104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449546] [NEW] Neutron-LB Health monitor association not listed in Horizon Dashboard

2015-04-28 Thread senthilmageswaran
Public bug reported:

In the LB Pool Horizon Dashboard, under
--> LB Pool --> Edit Pool --> Associate Monitor,

it is expected that all available health monitors will be listed. But
the list box is empty.

Please find the attached screen shot.

** Affects: neutron
 Importance: Undecided
 Assignee: senthilmageswaran (senthilmageswaran-muthusamy)
 Status: New

** Attachment added: "LB_monitor_list.JPG"
   
https://bugs.launchpad.net/bugs/1449546/+attachment/4386605/+files/LB_monitor_list.JPG

** Changed in: neutron
 Assignee: (unassigned) => senthilmageswaran (senthilmageswaran-muthusamy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449546

Title:
  Neutron-LB Health monitor association not listed in Horizon Dashboard

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the LB Pool Horizon Dashboard, under
  --> LB Pool --> Edit Pool --> Associate Monitor,

  it is expected that all available health monitors will be listed.
  But the list box is empty.

  Please find the attached screen shot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449544] [NEW] Neutron-LB Health monitor association mismatch in horizon and CLI

2015-04-28 Thread senthilmageswaran
Public bug reported:

When a new pool is created, all available health monitors are shown in
the LB-Pool information in the Horizon Dashboard.

But in the CLI,

neutron lb-pool-show   shows no monitors associated with the newly
created pool.
Please refer to "LB_HM_default_assoc_UI" and "LB_HM_default_assoc_CLI".

After associating any health monitor with the pool using the CLI, correct
information is displayed in both the Horizon Dashboard and the CLI.
So immediately after creating a new pool, the Horizon Dashboard lists all
the health monitors as associated, which is wrong and needs to be
corrected.

** Affects: neutron
 Importance: Undecided
 Assignee: senthilmageswaran (senthilmageswaran-muthusamy)
 Status: New


** Tags: lbaas

** Attachment added: "LB_HM_default_assoc_CLI_and_UI.zip"
   
https://bugs.launchpad.net/bugs/1449544/+attachment/4386604/+files/LB_HM_default_assoc_CLI_and_UI.zip

** Changed in: neutron
 Assignee: (unassigned) => senthilmageswaran (senthilmageswaran-muthusamy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449544

Title:
  Neutron-LB Health monitor association mismatch in horizon and CLI

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a new pool is created, all available health monitors are shown in
  the LB-Pool information in the Horizon Dashboard.

  But in the CLI,

  neutron lb-pool-show   shows no monitors associated with the newly
  created pool.
  Please refer to "LB_HM_default_assoc_UI" and "LB_HM_default_assoc_CLI".

  After associating any health monitor with the pool using the CLI,
  correct information is displayed in both the Horizon Dashboard and the
  CLI.
  So immediately after creating a new pool, the Horizon Dashboard lists
  all the health monitors as associated, which is wrong and needs to be
  corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449526] [NEW] When vlan id is out range of config_file, creating network still success.

2015-04-28 Thread Dongcan Ye
Public bug reported:

In neutron ml2 config file /etc/neutron/plugins/ml2/ml2_conf.ini,
specify vlan range from 1002 to 1030,

[ml2_type_vlan]
network_vlan_ranges =physnet2:1002:1030

When I specify VLAN ID 1070, which is outside the range in the ml2 config
file, the network is still created successfully.

In the bug fix, I will validate the VLAN segmentation ID against the
config file's VLAN range, as sketched below.
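
A minimal sketch of the intended check (a hypothetical helper, not the
actual ml2 type driver code):

    def validate_vlan_id(physnet, vlan_id, vlan_ranges):
        """vlan_ranges maps a physnet to a list of (min, max) tuples."""
        for vlan_min, vlan_max in vlan_ranges.get(physnet, []):
            if vlan_min <= vlan_id <= vlan_max:
                return
        raise ValueError('VLAN id %d is outside the configured range '
                         'for %s' % (vlan_id, physnet))

    # Mirrors the config above: physnet2:1002:1030
    validate_vlan_id('physnet2', 1070, {'physnet2': [(1002, 1030)]})
    # -> ValueError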

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449526

Title:
  When vlan id is out range of config_file, creating network still
  success.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In neutron ml2 config file /etc/neutron/plugins/ml2/ml2_conf.ini,
  specify vlan range from 1002 to 1030,

  [ml2_type_vlan]
  network_vlan_ranges =physnet2:1002:1030

  When I specify VLAN ID 1070, which is outside the range in the ml2
  config file, the network is still created successfully.

  In the bug fix, I will validate the VLAN segmentation ID against the
  config file's VLAN range.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444421] Re: Launch instance fails with nova network

2015-04-28 Thread Matthias Runge
The error message is:
Recoverable error: Invalid service catalog service: network

Note: my keystone catalog does not contain a network service.

** Changed in: horizon
   Status: Fix Committed => In Progress

** Changed in: horizon/kilo
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444421

Title:
  Launch instance fails with nova network

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) kilo series:
  In Progress

Bug description:
  Current Launch Instance (not angular launch instance)

  git checkout from kilo rc1:

  I have deployed a system with nova network instead of neutron.

  While trying to launch an instance, I'm getting:
  Internal Server Error: /project/instances/launch
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
164, in get_response
  response = response.render()
    File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
158, in render
  self.content = self.rendered_content
    File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
135, in rendered_content
  content = template.render(context, self._request)
    File "/usr/lib/python2.7/site-packages/django/template/backends/django.py", 
line 74, in render
  return self.template.render(context)
    File "/usr/lib/python2.7/site-packages/django/template/base.py", line 209, 
in render
  return self._render(context)
    File "/usr/lib/python2.7/site-packages/django/template/base.py", line 201, 
in _render
  return self.nodelist.render(context)
    File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, 
in render
  bit = self.render_node(node, context)
    File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, 
in render_node
  return node.render(context)
    File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", 
line 576, in render
  return self.nodelist.render(context)
    File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, 
in render
  bit = self.render_node(node, context)
    File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, 
in render_node
  return node.render(context)
    File "/usr/lib/python2.7/site-packages/django/template/loader_tags.py", 
line 56, in render
  result = self.nodelist.render(context)
    File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, 
in render
  bit = self.render_node(node, context)
    File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, 
in render_node
  return node.render(context)
    File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", 
line 217, in render
  nodelist.append(node.render(context))
    File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", 
line 322, in render
  match = condition.eval(context)
    File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", 
line 933, in eval
  return self.value.resolve(context, ignore_failures=True)
    File "/usr/lib/python2.7/site-packages/django/template/base.py", line 647, 
in resolve
  obj = self.var.resolve(context)
    File "/usr/lib/python2.7/site-packages/django/template/base.py", line 787, 
in resolve
  value = self._resolve_lookup(context)
    File "/usr/lib/python2.7/site-packages/django/template/base.py", line 847, 
in _resolve_lookup
  current = current()
    File "/home/mrunge/work/horizon/horizon/workflows/base.py", line 439, in 
has_required_fields
  return any(field.required for field in self.action.fields.values())
    File "/home/mrunge/work/horizon/horizon/workflows/base.py", line 368, in 
action
  context)
    File 
"/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 707, in __init__
  super(SetNetworkAction, self).__init__(request, *args, **kwargs)
    File "/home/mrunge/work/horizon/horizon/workflows/base.py", line 138, in 
__init__
  self._populate_choices(request, context)
    File "/home/mrunge/work/horizon/horizon/workflows/base.py", line 151, in 
_populate_choices
  bound_field.choices = meth(request, context)
    File 
"/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 721, in populate_network_choices
  return instance_utils.network_field_data(request)
    File 
"/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/utils.py",
 line 97, in network_field_data
  if not networks:
  UnboundLocalError: local variable 'networks' referenced before assignment
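
  A minimal reproduction of the UnboundLocalError pattern in this
  traceback (hypothetical names; the real code lives in
  openstack_dashboard/dashboards/project/instances/utils.py):

      def network_field_data(fail=True):
          try:
              if fail:
                  raise RuntimeError('no usable network service')
              networks = ['net-1']
          except RuntimeError:
              pass  # error swallowed, but 'networks' was never bound
          if not networks:  # UnboundLocalError when fail=True
              return []
          return networks

      # The conventional fix is to initialize networks = [] before the
      # try block.
      network_field_data()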

  Fun fact: this only occurs when using admin credentials; with a
  regular user, it doesn't happen.

  The error 

[Yahoo-eng-team] [Bug 1383465] Re: [pci-passthrough] nova-compute fails to start

2015-04-28 Thread Nikola Đipanov
*** This bug is a duplicate of bug 1415768 ***
https://bugs.launchpad.net/bugs/1415768

** This bug has been marked a duplicate of bug 1415768
   the pci deivce assigned to instance is inconsistent with DB record when 
restarting nova-compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383465

Title:
  [pci-passthrough] nova-compute fails to start

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Created a guest using nova with a passthrough device, shut down that
  guest, and disabled nova-compute (openstack-service stop). Went to
  turn things back on, and nova-compute failed to start.

  The trace:
  2014-10-20 16:06:45.734 48553 ERROR nova.openstack.common.threadgroup [-] PCI 
device request ({'requests': 
[InstancePCIRequest(alias_name='rook',count=2,is_new=False,request_id=None,spec=[{product_id='10fb',vendor_id='8086'}])],
 'code': 500}equests)s failed
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
125, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, 
in run_service
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 181, in start
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1152, in 
pre_start_hook
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5949, in 
update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
rt.update_available_resource(context)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 332, 
in update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self._update_available_resource(context, resources)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 
272, in inner
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return f(*args, **kwargs)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 349, 
in _update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self._update_usage_from_instances(context, resources, instances)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 708, 
in _update_usage_from_instances
  2014-10-20 16

[Yahoo-eng-team] [Bug 1449498] [NEW] The command “nova quota-show” should not display the quota of a user that has been deleted

2015-04-28 Thread jinquanni(ZTE)
Public bug reported:

1. Version: icehouse
2. Reproduce steps:
(example)
I create a tenant called bug_test,
then I created two users called test1 and test2 which belong to tenant
bug_test.
[root@njq002 ~(keystone_admin)]# keystone tenant-list
+--+--+-+
|id|   name   | enabled |
+--+--+-+
| 6485ffa6b1f448919f00acab23207018 |  admin   |   True  |
| c93d944ee63a45e0880161608e62eb83 | bug_test |   True  |
| dc09e51878b54448b6ed39522295bb5a | services |   True  |
+--+--+-+
[root@njq002 ~(keystone_admin)]# keystone user-list --tenant bug_test
+--+---+-+---+
|id|  name | enabled | email |
+--+---+-+---+
| 91c422e673ad4c399aace18ba5c4f049 | test1 |   True  |   |
| be4fb020ae8b4001afb1797d81a3d903 | test2 |   True  |   |
+--+---+-+---+

Then I use “nova quota-update” to change user test2's quota in this
tenant, like this:

[root@njq002 ~(keystone_admin)]# nova quota-update 
c93d944ee63a45e0880161608e62eb83 --user be4fb020ae8b4001afb1797d81a3d903 
--instances 8
[root@njq002 ~(keystone_admin)]# nova quota-show  --tenant  
c93d944ee63a45e0880161608e62eb83 --user be4fb020ae8b4001afb1797d81a3d903
+-+---+
| Quota   | Limit |
+-+---+
| instances   | 8 |
| cores   | 20|
| ram | 51200 |
| floating_ips| 10|
| fixed_ips   | -1|
| metadata_items  | 128   |
| injected_files  | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes| 255   |
| key_pairs   | 100   |
| security_groups | 10|
| security_group_rules| 20|
+-+---+

Then I delete the user called test2:

[root@njq002 ~(keystone_admin)]# keystone  user-delete  
be4fb020ae8b4001afb1797d81a3d903
[root@njq002 ~(keystone_admin)]# keystone user-list --tenant bug_test
+--+---+-+---+
|id|  name | enabled | email |
+--+---+-+---+
| 91c422e673ad4c399aace18ba5c4f049 | test1 |   True  |   |
+--+---+-+---+

Finally, I execute 
“nova quota-show  --tenant  c93d944ee63a45e0880161608e62eb83 --user 
be4fb020ae8b4001afb1797d81a3d903”

Expected result:

I think the system should prompt something like: "The user does not
exist".

But the actual result is:
[root@njq002 ~(keystone_admin)]# nova quota-show  --tenant  
c93d944ee63a45e0880161608e62eb83 --user be4fb020ae8b4001afb1797d81a3d903
+-+---+
| Quota   | Limit |
+-+---+
| instances   | 8 |
| cores   | 20|
| ram | 51200 |
| floating_ips| 10|
| fixed_ips   | -1|
| metadata_items  | 128   |
| injected_files  | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes| 255   |
| key_pairs   | 100   |
| security_groups | 10|
| security_group_rules| 20|
+-+---+

I don't think this is very reasonable.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova quotas

** Tags added: nova quotas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449498

Title:
  The command “nova quota-show” should not display the quota of a user
  that has been deleted

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Version: icehouse
  2. Reproduce steps:
  (example)
  I create a tenant called bug_test,
  then I created two users called test1 and test2 which belong to tenant
  bug_test.
  [root@njq002 ~(keystone_admin)]# keystone tenant-list
  +--+--+-+
  |id|   name   | enabled |
  +--+--+-+
  | 6485ffa6b1f448919f00acab23207018 |  admin   |   True  |
  | c93d944ee63a45e0880161608e62eb83 | bug_test |   True  |
  | dc09e51878b54448b6ed39522295bb5a | services |   True  |
  +--+--+-+
  [root@njq002 ~(keystone_admin)]# keystone user-list --tenant bug_test
  +--+---+-+---+
  |id|  name | enabled | email |
  +--+---+-

[Yahoo-eng-team] [Bug 1277104] Re: wrong order of assertEquals args

2015-04-28 Thread Sushil Kumar
** Also affects: trove
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1277104

Title:
  wrong order of assertEquals args

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Oslo Policy:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Glance:
  Fix Released
Status in Python client library for Ironic:
  Fix Released
Status in Python client library for Nova:
  Fix Released
Status in OpenStack Command Line Client:
  Fix Released
Status in Python client library for Sahara (ex. Savanna):
  Fix Released
Status in Python client library for Swift:
  In Progress
Status in Rally:
  Confirmed
Status in Openstack Database (Trove):
  New

Bug description:
  The args of the assertEquals method in ceilometer.tests are arranged
  in the wrong order. As a result, when a test fails it shows incorrect
  information about the expected and actual data. This occurs more than
  2000 times. The right order of arguments is "expected, actual".

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1277104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449462] [NEW] read_deleted in neutron context serves no purpose

2015-04-28 Thread Salvatore Orlando
Public bug reported:

According to the docstring at [1], I can specify read_deleted='yes' or
'only' to see deleted records when performing queries.
However, Neutron does not perform soft deletes.
Also, I know Neutron's DB management layer a little, and I'm pretty sure
it never uses read_deleted anywhere.
As far as I remember, no plugin makes use of it either.
According to git history this was added with an initial commit for the
Neutron context [2], which was probably more or less a cut & paste from
nova.

It is worth removing that parameter before somebody actually tries to set
it to 'yes' or 'only'.

[1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/context.py#n44
[2] https://review.openstack.org/#/c/7952/

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449462

Title:
  read_deleted in neutron context serves no purpose

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  According to the docstring at [1], I can specify read_deleted='yes' or
  'only' to see deleted records when performing queries.
  However, Neutron does not perform soft deletes.
  Also, I know Neutron's DB management layer a little, and I'm pretty
  sure it never uses read_deleted anywhere.
  As far as I remember, no plugin makes use of it either.
  According to git history this was added with an initial commit for the
  Neutron context [2], which was probably more or less a cut & paste
  from nova.

  It is worth removing that parameter before somebody actually tries to
  set it to 'yes' or 'only'.

  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/context.py#n44
  [2] https://review.openstack.org/#/c/7952/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447132] Re: nova-manage db migrate_flavor_data doesn't do instances not in instance_extra

2015-04-28 Thread Thierry Carrez
** No longer affects: nova/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447132

Title:
  nova-manage db migrate_flavor_data doesn't do instances not in
  instance_extra

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  nova-manage db migrate_flavor_data selects all of the instances by
  joining them to the instance_extra table and then checks which ones
  have flavor information in the metadata table or the extras table.
  However, if an instance isn't in instance_extra (for example, it
  hasn't been written to since the creation of the extras table) then it
  won't be migrated (even if it isn't deleted AND has flavor info in the
  metadata table).

  migrate_flavor_data should select all of the instances in the metadata
  table with flavor information and migrate those.
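
  A toy illustration of the difference (hypothetical data, not the real
  schema):

      # Instances 'b' and 'c' were never written to instance_extra, but
      # still carry flavor info in the metadata table.
      instances = {'a', 'b', 'c'}
      instance_extra = {'a'}
      metadata_with_flavor = {'a', 'b', 'c'}

      # Driving the selection from a join with instance_extra misses them:
      print(instances & instance_extra & metadata_with_flavor)  # {'a'}

      # Driving it from the metadata table migrates everything:
      print(instances & metadata_with_flavor)  # {'a', 'b', 'c'}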

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448075] Re: Recent compute RPC API version bump missed out on security group parts of the api

2015-04-28 Thread Thierry Carrez
Gate is so backed-up it's unclear we have time to do a RC3 before
release, especially with this patch not being merged in master yet.

RC3 will be reconsidered once this gets in, depending on gate status at
that point.

** Tags added: kilo-rc-potential

** No longer affects: nova/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1448075

Title:
  Recent compute RPC API version bump missed out on security group parts
  of the api

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Because the compute and security group client-side RPC APIs share the
  same target, they need to be bumped together, as was done previously in
  6ac1a84614dc6611591cb1f1ec8cce737972d069 and
  6b238a5c9fcef0e62cefbaf3483645f51554667b.

  In fact, having two different client-side RPC APIs for the same target
  is of little value and, to avoid future mistakes, they should really be
  merged into one.

  The impact of this bug is that all security group related calls will
  start to fail in an upgrade scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1448075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448822] Re: let modal center

2015-04-28 Thread tinytmy
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1448822

Title:
  let modal center

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The pop-up modal is not centered at the moment; we can center it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1448822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449405] [NEW] Allow VPNaaS service provider and device driver to be configurable in devstack

2015-04-28 Thread venkata anil
Public bug reported:

Add devstack plugin for neutron-vpnaas like neutron-lbaas
https://github.com/openstack/neutron-lbaas/commit/facaaf9470efe06d305df59bc28cab1cfabd2fed

Add devstack scripts (devstack/plugin.sh and devstack/settings) to
neutron-vpnaas, as in neutron-lbaas; devstack uses these to configure the
VPNaaS service provider and device driver during devstack setup.

It looks like devstack won't allow any changes to its repo for selecting
service provider/device driver settings for neutron advanced services.
** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449405

Title:
  Allow VPNaaS service provider and device driver  to be configurable in
  devstack

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Add devstack plugin for neutron-vpnaas like neutron-lbaas
  
https://github.com/openstack/neutron-lbaas/commit/facaaf9470efe06d305df59bc28cab1cfabd2fed

  Add devstack scripts (devstack/plugin.sh and devstack/settings) to
  neutron-vpnaas, as in neutron-lbaas; devstack uses these to configure
  the VPNaaS service provider and device driver during devstack setup.

  It looks like devstack won't allow any changes to its repo for
  selecting service provider/device driver settings for neutron advanced
  services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447463] Re: glance.tests.functional.v2.test_images.TestImages.test_download_random_access failed

2015-04-28 Thread Pawel Koniszewski
It is no longer valid, so I am marking it as invalid.

** Changed in: glance
   Status: Confirmed => Won't Fix

** Changed in: glance
   Status: Won't Fix => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447463

Title:
  glance.tests.functional.v2.test_images.TestImages.test_download_random_access
  failed

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  The error message is below.

  Traceback (most recent call last):
File "tools/colorizer.py", line 326, in 
  if runner.run(test).wasSuccessful():
File "/usr/lib/python2.7/unittest/runner.py", line 158, in run
  result.printErrors()
File "tools/colorizer.py", line 305, in printErrors
  self.printErrorList('FAIL', self.failures)
File "tools/colorizer.py", line 315, in printErrorList
  self.stream.writeln("%s" % err)
File "/usr/lib/python2.7/unittest/runner.py", line 24, in writeln
  self.write(arg)
  UnicodeEncodeError: 'ascii' codec can't encode characters in position 
600-602: ordinal not in range(128)

  There is a GET request to the glance server:

  response = requests.get(path, headers=headers)

  The text in this response is a unicode string:
  '\x1f\x8b\x08\x00\x00\x00\x00\x00\x02\xff\x8b\x02\x00gW\xbcY\x01\x00\x00\x00'

  The ASCII codec can't encode this unicode string.
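
  A minimal reproduction of the failure mode (hypothetical, assuming an
  ASCII-encoded output stream like the test runner's):

      import codecs
      import io

      stream = codecs.getwriter('ascii')(io.BytesIO())
      stream.write(u'\x1f\x8b\x08\xff')
      # UnicodeEncodeError: 'ascii' codec can't encode character u'\x8b'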

  This issue also affects other functional tests, such as
  test_image_life_cycle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp