[Yahoo-eng-team] [Bug 1405082] [NEW] selected networks in launch instance workflow are not aligned well

2014-12-22 Thread Liyingjun
Public bug reported:

Selected networks in the launch instance workflow are not aligned well;
see the attachment for details.

** Affects: horizon
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: New

** Attachment added: "Screen Shot 2014-12-23 at 3.18.16 PM.png"
   
https://bugs.launchpad.net/bugs/1405082/+attachment/4286178/+files/Screen%20Shot%202014-12-23%20at%203.18.16%20PM.png

** Changed in: horizon
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405082

Title:
  selected networks in launch instance workflow are not aligned well

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Selected networks in the launch instance workflow are not aligned
  well; see the attachment for details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405077] [NEW] VLAN configuration changes are not applied until the neutron server is restarted

2014-12-22 Thread Shivakumar M
Public bug reported:

I changed the network_vlan_ranges option in the configuration file, and I
want the change to take effect without restarting the neutron server.
This may not be a bug, but restarting the networking service itself could
cause some critical processes to stop temporarily.

As some configuration options are subject to frequent change,
automatically reloading the configuration without restarting the whole
service may be a feasible solution.
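
A minimal sketch of one way this could work, assuming an oslo.config-based
service and a version of oslo.config that provides
ConfigOpts.reload_config_files() (the handler is illustrative, not actual
neutron code). Note that values a plugin parsed and cached at startup,
such as VLAN ranges, would still need to be re-parsed after the reload:

    import signal

    from oslo.config import cfg  # Juno-era import namespace

    CONF = cfg.CONF

    def _reload_config(signum, frame):
        # Re-read all registered config files in place; only option
        # values change, the process keeps running.
        CONF.reload_config_files()

    # Installed once at service startup; 'kill -HUP <pid>' then picks
    # up edits such as a changed network_vlan_ranges value.
    signal.signal(signal.SIGHUP, _reload_config)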

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: config neutron reload

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405077

Title:
  VLAN configuration changes are not applied until the neutron server
  is restarted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I changed the network_vlan_ranges option in the configuration file, and
  I want the change to take effect without restarting the neutron server.
  This may not be a bug, but restarting the networking service itself
  could cause some critical processes to stop temporarily.

  As some configuration options are subject to frequent change,
  automatically reloading the configuration without restarting the whole
  service may be a feasible solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405077/+subscriptions



[Yahoo-eng-team] [Bug 1405069] Re: nova client list not working when passing sort parameter

2014-12-22 Thread Eli Qiao
** Changed in: python-novaclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405069

Title:
  nova client list not working when passing sort parameter

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  Invalid

Bug description:
  The nova client sends an erroneous request URL to nova-api:

  nova --debug list  --all-tenants --deleted --sort name

  
  ...
  DEBUG (session:162) REQ: curl -i -X GET 
http://cloudcontroller:8774/v2/d7beb7f28e0b4f41901215000339361d/servers/detail?all_tenants=1&deleted=True&sort_dir=desc&sort_key=name
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}6ba6e70be8c8367deec4e1696758b7dc4a1a891a"
  ...

  ClientException: The server has either erred or is incapable of performing 
the requested operation. (HTTP 500) (Request-ID: 
req-c2a132da-05b0-4db3-84ee-1b8aeb7d0f61)
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-c2a132da-05b0-4db3-84ee-1b8aeb7d0f61)

  The URL parameter should be passed as 'sort_keys' instead of
  'sort_key'.

  Besides, nova-api/nova should handle this exception instead of raising
  an internal error (500) to the customer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1405069/+subscriptions



[Yahoo-eng-team] [Bug 1405069] [NEW] nova client list not working when passing sort parameter

2014-12-22 Thread Eli Qiao
Public bug reported:

The nova client sends an erroneous request URL to nova-api:

nova --debug list  --all-tenants --deleted --sort name


...
DEBUG (session:162) REQ: curl -i -X GET 
http://cloudcontroller:8774/v2/d7beb7f28e0b4f41901215000339361d/servers/detail?all_tenants=1&deleted=True&sort_dir=desc&sort_key=name
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}6ba6e70be8c8367deec4e1696758b7dc4a1a891a"
...

ClientException: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-c2a132da-05b0-4db3-84ee-1b8aeb7d0f61)
ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-c2a132da-05b0-4db3-84ee-1b8aeb7d0f61)

The URL parameter should be passed as 'sort_keys' instead of 'sort_key'.

Besides, nova-api/nova should handle this exception instead of raising an
internal error (500) to the customer.
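
A minimal sketch of the kind of server-side guard the last paragraph asks
for, assuming a webob-based handler (the whitelist and helper are
hypothetical): validate the sort key up front and return 400 instead of
letting an unexpected key escape as a 500.

    import webob.exc

    # Hypothetical whitelist; the real set would be derived from the
    # sortable columns of the instances DB model.
    ALLOWED_SORT_KEYS = ('created_at', 'display_name', 'host', 'uuid')

    def validate_sort_key(params):
        sort_key = params.get('sort_key')
        if sort_key is not None and sort_key not in ALLOWED_SORT_KEYS:
            # A 400 tells the caller the request was malformed; an
            # unhandled exception is what surfaces as HTTP 500 today.
            raise webob.exc.HTTPBadRequest(
                explanation="Invalid sort_key: %s" % sort_key)
        return sort_key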

** Affects: nova
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: New

** Affects: python-novaclient
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: New

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => Eli Qiao (taget-9)

** Changed in: nova
 Assignee: (unassigned) => Eli Qiao (taget-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405069

Title:
  nova client list not working when passing sort parameter

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  New

Bug description:
  The nova client sends an erroneous request URL to nova-api:

  nova --debug list  --all-tenants --deleted --sort name

  
  ...
  DEBUG (session:162) REQ: curl -i -X GET 
http://cloudcontroller:8774/v2/d7beb7f28e0b4f41901215000339361d/servers/detail?all_tenants=1&deleted=True&sort_dir=desc&sort_key=name
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}6ba6e70be8c8367deec4e1696758b7dc4a1a891a"
  ...

  ClientException: The server has either erred or is incapable of performing 
the requested operation. (HTTP 500) (Request-ID: 
req-c2a132da-05b0-4db3-84ee-1b8aeb7d0f61)
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-c2a132da-05b0-4db3-84ee-1b8aeb7d0f61)

  The URL parameter should be passed as 'sort_keys' instead of
  'sort_key'.

  Besides, nova-api/nova should handle this exception instead of raising
  an internal error (500) to the customer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1405069/+subscriptions



[Yahoo-eng-team] [Bug 1405063] [NEW] Flavor overview page does not exist

2014-12-22 Thread Sudheer Kalla
Public bug reported:

The Horizon dashboard does not contain an overview page for each flavor.
Since the CLI supports showing flavor details via the command nova
flavor-show {id}, the dashboard should also provide an option to view
flavor details.
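
For reference, a minimal sketch of fetching the same details the CLI
shows, using the Juno-era python-novaclient (credentials are
placeholders); these attributes are what an overview page could display:

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')

    # Equivalent of 'nova flavor-show 1'.
    flavor = nova.flavors.get('1')
    print(flavor.name, flavor.vcpus, flavor.ram, flavor.disk)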

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405063

Title:
  Flavor overview page does not exist

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Horizon dashboard does not contain an overview page for each
  flavor. Since the CLI supports showing flavor details via the command
  nova flavor-show {id}, the dashboard should also provide an option to
  view flavor details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405063/+subscriptions



[Yahoo-eng-team] [Bug 1405057] [NEW] Filter port-list based on associated security_groups not working

2014-12-22 Thread Shivakumar M
Public bug reported:

Sample Usecases:

1. neutron port-list --security_groups=6f3d9d9d-e84d-437c-ac40-82ce3196230c
Invalid input for operation: '6' is not an integer or uuid.

2. neutron port-list --security_groups list=true 6f3d9d9d-e84d-437c-ac40-82ce3196230c
Invalid input for operation: '6' is not an integer or uuid.

Since the security_groups associated with a port are referenced from the
securitygroups DB table, we should be able to filter ports based on
security_groups directly, as it works for other parameters.

Example:
neutron port-list --mac_address list=true fa:16:3e:40:2b:cc fa:16:3e:8e:32:3e
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 1cecec78-226f-4379-b5ad-c145e2e14048 |      | fa:16:3e:40:2b:cc | {"subnet_id": "af938c1c-e2d7-47a0-954a-ec8524677486", "ip_address": "50.10.10.2"}  |
| eec24494-09a8-4fa8-885d-e3fda37fe756 |      | fa:16:3e:8e:32:3e | {"subnet_id": "af938c1c-e2d7-47a0-954a-ec8524677486", "ip_address": "50.10.10.3"}  |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
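
Until the server-side filter works, a client-side workaround is possible;
a minimal sketch assuming the Juno-era python-neutronclient (credentials
are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    wanted = '6f3d9d9d-e84d-437c-ac40-82ce3196230c'
    # Each port dict carries its 'security_groups' list, so we can
    # filter locally instead of passing --security_groups to the API.
    ports = [p for p in neutron.list_ports()['ports']
             if wanted in p.get('security_groups', [])]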

** Affects: neutron
 Importance: Undecided
 Assignee: Shivakumar M (shiva-m)
 Status: New


** Tags: neutron port-list security-groups

** Changed in: neutron
 Assignee: (unassigned) => Shivakumar M (shiva-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405057

Title:
  Filter port-list based on associated security_groups not working

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Sample Usecases:

  1. neutron port-list --security_groups=6f3d9d9d-e84d-437c-ac40-82ce3196230c
  Invalid input for operation: '6' is not an integer or uuid.

  2. neutron port-list --security_groups list=true 6f3d9d9d-e84d-437c-ac40-82ce3196230c
  Invalid input for operation: '6' is not an integer or uuid.

  Since the security_groups associated with a port are referenced from
  the securitygroups DB table, we should be able to filter ports based on
  security_groups directly, as it works for other parameters.

  Example:
  neutron port-list --mac_address list=true fa:16:3e:40:2b:cc fa:16:3e:8e:32:3e
  +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                          |
  +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
  | 1cecec78-226f-4379-b5ad-c145e2e14048 |      | fa:16:3e:40:2b:cc | {"subnet_id": "af938c1c-e2d7-47a0-954a-ec8524677486", "ip_address": "50.10.10.2"}  |
  | eec24494-09a8-4fa8-885d-e3fda37fe756 |      | fa:16:3e:8e:32:3e | {"subnet_id": "af938c1c-e2d7-47a0-954a-ec8524677486", "ip_address": "50.10.10.3"}  |
  +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405057/+subscriptions



[Yahoo-eng-team] [Bug 1405049] [NEW] Can't see the router in the network topology page if neutron l3 agent HA is enabled.

2014-12-22 Thread Hong Hui Xiao
Public bug reported:

When I enabled neutron L3 agent HA by setting the properties in
neutron.conf, I created a router from Horizon. But I can't see the
router on the "Network Topology" page.

Everything else works fine, for example adding a gateway or adding an
interface.
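
For reference, a minimal sketch of the neutron.conf properties involved
(values are illustrative):

    [DEFAULT]
    # Create new routers as HA routers by default.
    l3_ha = True
    # Bounds on how many L3 agents host each HA router.
    max_l3_agents_per_router = 3
    min_l3_agents_per_router = 2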

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405049

Title:
  Can't see the router in the network topology page if neutron l3 agent
  HA is enabled.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I enabled neutron L3 agent HA by setting the properties in
  neutron.conf, I created a router from Horizon. But I can't see the
  router on the "Network Topology" page.

  Everything else works fine, for example adding a gateway or adding an
  interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405049/+subscriptions



[Yahoo-eng-team] [Bug 1405044] [NEW] [GPFS] nova volume-attach a gpfs volume with an error log in nova-compute

2014-12-22 Thread Lan Qi song
Public bug reported:

When I attached a GPFS volume to an instance, the volume was
successfully attached, but there were some error logs in the
nova-compute log file, as below:

2014-12-22 21:52:10.863 13396 ERROR nova.openstack.common.threadgroup [-] 
Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf blockdev --getsize64 
/gpfs/volume-98520c4e-935d-43d8-9c8d-00fcb54bb335
Exit code: 1
Stdout: u''
Stderr: u'BLKGETSIZE64: Inappropriate ioctl for device\n'
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 
125, in wait
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
x.wait()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 173, in wait
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/event.py", line 121, in wait
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 212, in main
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 490, 
in run_service
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
service.start()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/service.py", line 181, in start
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1159, in 
pre_start_hook
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 6037, in 
update_available_resource
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
nodenames = set(self.driver.get_available_nodes())
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/driver.py", line 1237, in 
get_available_nodes
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup stats 
= self.get_host_stats(refresh=refresh)
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 5794, in 
get_host_stats
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
return self.host_state.get_host_stats(refresh=refresh)
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 473, in 
host_state
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
self._host_state = HostState(self)
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 6360, in 
__init__
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
self.update_status()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 6411, in 
update_status
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup 
data['disk_available_least'] = _get_disk_available_least()
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 6384, in 
_get_disk_available_least
2014-12-22 21:52:10.863 13396 TRACE nova.openstack.common.t
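
The trace suggests _get_disk_available_least() runs 'blockdev --getsize64'
against every disk path it finds, including GPFS-backed volume files,
which are regular files rather than block devices. A minimal sketch of a
guard that avoids the ioctl error (illustrative, not the actual nova fix):

    import os
    import stat

    from nova import utils  # nova's execute() wrapper

    def get_size(path):
        # blockdev only works on real block devices; fall back to
        # os.path.getsize() for regular files such as GPFS volumes.
        if stat.S_ISBLK(os.stat(path).st_mode):
            out, _err = utils.execute('blockdev', '--getsize64', path,
                                      run_as_root=True)
            return int(out.strip())
        return os.path.getsize(path)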

[Yahoo-eng-team] [Bug 1405041] [NEW] test report a bug

2014-12-22 Thread dominic_chen
Public bug reported:

test

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1405041

Title:
  test report a bug

Status in OpenStack Identity (Keystone):
  New

Bug description:
  test

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1405041/+subscriptions



[Yahoo-eng-team] [Bug 1404491] Re: Can't boot from Image (create new Volume) for Windows ONLY

2014-12-22 Thread Joe Gordon
This looks like you have set something in nova.conf that is trying to
store the image somewhere it doesn't have permission: Permission denied:
'/root/WindowsServer2012R2_x64.iso'

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404491

Title:
  Can't boot from Image (create new Volume) for Windows ONLY

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  ENVIRONMENT:
  I am using Icehouse and using external storage for cinder. 
  Network and Controller in one hypervisor - Ubuntu KVM
  Compute on another hypervisor - Ubuntu kvm
  GRE tunnel between compute and network node

  ISSUE:
  When I use the boot from Image (create new Volume) option for Windows
  2012 R2, it fails. It did create the volume, but afterwards it errors
  out (look for the error in the log below).
  If I create a separate volume and attach it to the Windows instance in
  two steps, it works fine.
  Boot from Image (create new Volume) also works fine with CentOS.
  Below are the log messages from nova-compute.log on the compute node:

  2014-12-20 00:04:18.346 15633 TRACE nova.openstack.common.periodic_task OSError: [Errno 13] Permission denied: '/root/WindowsServer2012R2_x64.iso'
  2014-12-20 00:04:18.346 15633 TRACE nova.openstack.common.periodic_task
  2014-12-20 00:05:10.853 15633 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
  2014-12-20 00:05:11.419 15633 ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager.update_available_resource: [Errno 13] Permission denied: '/root/WindowsServer2012R2_x64.iso'
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/periodic_task.py", line 182, in run_periodic_tasks
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     task(self, context)
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5444, in update_available_resource
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     rt.update_available_resource(context)
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 249, in inner
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     return f(*args, **kwargs)
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 293, in update_available_resource
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     resources = self.driver.get_available_resource(self.nodename)
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4187, in get_available_resource
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     stats = self.get_host_stats(refresh=True)
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4862, in get_host_stats
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     return self.host_state.get_host_stats(refresh=refresh)
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5263, in get_host_stats
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     self.update_status()
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5306, in update_status
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     data['disk_available_least'] = _get_disk_available_least()
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5279, in _get_disk_available_least
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task     disk_over_committed = (self.driver.
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4832, in _get_disk_over_committed_size_total
  2014-12-20 00:05:11.419 15633 TRACE nova.openstack.common.peri

[Yahoo-eng-team] [Bug 1404241] Re: nova-compute state not updated

2014-12-22 Thread Joe Gordon
It sounds like there may be an issue in oslo.messaging with reconnecting

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404241

Title:
  nova-compute state not updated

Status in OpenStack Compute (Nova):
  Incomplete
Status in Messaging API for OpenStack:
  New

Bug description:
  I'm running 2014.2.1 on CentOS 7. One controller and five compute
  nodes are deployed using packstack.

  Whenever I reboot the controller node, some of the nova-compute
  services still report state=XXX even 60 minutes after the reboot
  completed and the controller node is up and running again:

  [root@juno1 ~(keystone_admin)]# nova-manage service list
  Binary            Host    Zone      Status    State   Updated_At
  nova-consoleauth  juno1   internal  enabled   :-)     2014-12-19 13:17:48
  nova-scheduler    juno1   internal  enabled   :-)     2014-12-19 13:17:47
  nova-conductor    juno1   internal  enabled   :-)     2014-12-19 13:17:47
  nova-cert         juno1   internal  enabled   :-)     2014-12-19 13:17:48
  nova-compute      juno4   nova      enabled   XXX     2014-12-19 12:26:56
  nova-compute      juno5   nova      enabled   :-)     2014-12-19 13:17:47
  nova-compute      juno6   nova      enabled   :-)     2014-12-19 13:17:46
  nova-compute      juno3   nova      enabled   :-)     2014-12-19 13:17:46
  nova-compute      juno2   nova      enabled   XXX     2014-12-19 12:21:52

  Here is the chunk of nova-compute log from juno4:

  2014-12-19 15:46:02.082 5193 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 1.0 seconds...
  2014-12-19 15:46:02.083 5193 ERROR oslo.messaging._drivers.impl_rabbit [-] 
Failed to consume message from queue: Socket closed
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
Traceback (most recent call last):
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
655, in ensure
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return method()
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
735, in _consume
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return self.connection.drain_events(timeout=timeout)
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 281, in 
drain_events
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return self.transport.drain_events(self.connection, **kwargs)
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 94, in 
drain_events
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return connection.drain_events(**kwargs)
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 299, in drain_events
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
chanmap, None, timeout=timeout,
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 362, in 
_wait_multiple
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
channel, method_sig, args, content = read_timeout(timeout)
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 333, in read_timeout
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return self.method_reader.read_method()
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 189, in 
read_method
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
raise m
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
IOError: Socket closed
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
  2014-12-19 15:46:02.084 5193 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 1.0 seconds...
  2014-12-19 15:46:03.084 5193 INFO oslo.m

[Yahoo-eng-team] [Bug 1405033] [NEW] Integration test: LoginPage doesn't support multiple regions

2014-12-22 Thread Wu Hong Guang
Public bug reported:

LoginPage doesn't work in a multi-region deployment

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: integration-tests

** Summary changed:

- Integration test: login doesn't support Region
+ Integration test: LoginPage doesn't support multiple regions

** Tags added: integration-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405033

Title:
  Integration test: LoginPage doesn't support multiple regions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  LoginPage doesn't work in a multi-region deployment

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405033/+subscriptions



[Yahoo-eng-team] [Bug 1404764] Re: shared storage live migration fails in the _post_live_migration() function, but the status of the instance is still "migrating".

2014-12-22 Thread Rong Han ZTE
It's invalid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404764

Title:
  shared storage live migration fails in the _post_live_migration()
  function, but the status of the instance is still "migrating".

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  A shared storage live migration failed in the _post_live_migration()
  function because the umount command failed, but the status of the
  instance is still "migrating".

  Log is as follows:

  2014-12-19 16:45:32.741 6127 INFO nova.compute.manager [-] [instance: 
e9fab51d-8e13-416b-b2c9-211e04ba35b2] _post_live_migration() is started..
  2014-12-19 16:45:32.779 6127 INFO urllib3.connectionpool [-] Starting new 
HTTP connection (1): 10.47.158.165
  2014-12-19 16:45:32.845 6127 INFO nova.compute.manager [-] [instance: 
e9fab51d-8e13-416b-b2c9-211e04ba35b2] During sync_power_state the instance has 
a pending task. Skip.
  2014-12-19 16:45:33.041 6127 ERROR nova.openstack.common.loopingcall [-] in 
fixed duration looping call
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
Traceback (most recent call last):
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py", line 
78, in _inner
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4987, in 
wait_for_live_migration
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
migrate_data)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
payload)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
six.reraise(self.type_, self.value, self.tb)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
return f(self, context, *args, **kw)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 369, in 
decorated_function
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall e, 
sys.exc_info())
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
six.reraise(self.type_, self.value, self.tb)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 356, in 
decorated_function
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
return function(self, context, *args, **kwargs)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4826, in 
_post_live_migration
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
migrate_data)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5185, in 
post_live_migration
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
self._umount_instance_sysdisk(instance)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2214, in 
_umount_instance_sysdisk
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
utils.execute('umount', mount_path, run_as_root=True)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/utils.py", line 165, in execute
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
return processutils.execute(*cmd, **kwargs)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 
195, in execute
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
cmd=sanitized_cmd)
  2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
ProcessExecutionError: 

[Yahoo-eng-team] [Bug 1404037] Re: SimpleReadOnlySaharaClientTest.test_sahara_help fails in gate

2014-12-22 Thread Joe Gordon
Unless this keeps happening, I think this is related to some new versions
of dependencies and should be resolved already.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404037

Title:
  SimpleReadOnlySaharaClientTest.test_sahara_help fails in gate

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  New

Bug description:
  Fails on various gate jobs, example patch here:
  https://review.openstack.org/#/c/141931/  at Dec 18, 22:34 UTC

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404037/+subscriptions



[Yahoo-eng-team] [Bug 1404848] Re: tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern failing on Jenkins

2014-12-22 Thread Joe Gordon
Looks like a neutron error: http://logs.openstack.org/15/143315/3/check
/check-grenade-dsvm-
neutron/90ca1e9/logs/new/screen-n-api.txt.gz#_2014-12-22_10_19_30_242

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404848

Title:
  
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
  failing on Jenkins

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
  failing on Jenkins.

  Here's the traceback:

  2014-12-22 10:19:41.732 | 
  2014-12-22 10:19:41.732 | Traceback (most recent call last):
  2014-12-22 10:19:41.732 |   File "tempest/test.py", line 112, in wrapper
  2014-12-22 10:19:41.732 | return f(self, *func_args, **func_kwargs)
  2014-12-22 10:19:41.732 |   File 
"tempest/scenario/test_snapshot_pattern.py", line 69, in test_snapshot_pattern
  2014-12-22 10:19:41.732 | server = 
self._boot_image(CONF.compute.image_ref)
  2014-12-22 10:19:41.732 |   File 
"tempest/scenario/test_snapshot_pattern.py", line 45, in _boot_image
  2014-12-22 10:19:41.733 | return self.create_server(image=image_id, 
create_kwargs=create_kwargs)
  2014-12-22 10:19:41.733 |   File "tempest/scenario/manager.py", line 209, 
in create_server
  2014-12-22 10:19:41.733 | status='ACTIVE')
  2014-12-22 10:19:41.733 |   File 
"tempest/services/compute/json/servers_client.py", line 183, in 
wait_for_server_status
  2014-12-22 10:19:41.733 | ready_wait=ready_wait)
  2014-12-22 10:19:41.733 |   File "tempest/common/waiters.py", line 66, in 
wait_for_server_status
  2014-12-22 10:19:41.733 | resp, body = client.get_server(server_id)
  2014-12-22 10:19:41.733 |   File 
"tempest/services/compute/json/servers_client.py", line 142, in get_server
  2014-12-22 10:19:41.733 | resp, body = self.get("servers/%s" % 
str(server_id))
  2014-12-22 10:19:41.733 |   File "tempest/common/rest_client.py", line 
239, in get
  2014-12-22 10:19:41.733 | return self.request('GET', url, 
extra_headers, headers)
  2014-12-22 10:19:41.734 |   File "tempest/common/rest_client.py", line 
450, in request
  2014-12-22 10:19:41.734 | resp, resp_body)
  2014-12-22 10:19:41.734 |   File "tempest/common/rest_client.py", line 
547, in _error_checker
  2014-12-22 10:19:41.734 | raise exceptions.ServerFault(message)
  2014-12-22 10:19:41.734 | ServerFault: Got server fault
  2014-12-22 10:19:41.734 | Details: The server has either erred or is 
incapable of performing the requested operation.
  201

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404848/+subscriptions



[Yahoo-eng-team] [Bug 1405007] [NEW] Do not reschedule a router when multiple external provider networks configured on a node with single l3 agent

2014-12-22 Thread Swaminathan Vasudevan
Public bug reported:

Neutron has supported multiple external provider networks on a single L3
agent since the Icehouse release.

But in the current neutron code, when multiple external provider
networks are configured on a node with a single L3 agent, the
router-update or router-gateway-set command with the second external
network will try to reschedule the router, even though there is only one
L3 agent on the node.
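
A minimal sketch of the kind of guard this suggests (names are
illustrative, not the actual scheduler code):

    def should_reschedule(candidate_l3_agents, current_agent):
        # Rescheduling only makes sense when another L3 agent could
        # actually take the router over; with a single agent on the
        # node, a gateway update should leave the binding alone.
        others = [a for a in candidate_l3_agents if a != current_agent]
        return bool(others)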

** Affects: neutron
 Importance: Undecided
 Assignee: Swaminathan Vasudevan (swaminathan-vasudevan)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Swaminathan Vasudevan (swaminathan-vasudevan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405007

Title:
  Do not reschedule a router when multiple external provider networks
  configured on a node with single l3 agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron has supported multiple external provider networks on a single
  L3 agent since the Icehouse release.

  But in the current neutron code, when multiple external provider
  networks are configured on a node with a single L3 agent, the
  router-update or router-gateway-set command with the second external
  network will try to reschedule the router, even though there is only
  one L3 agent on the node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405007/+subscriptions



[Yahoo-eng-team] [Bug 1342080] Re: glance api is tracebacking with "error: [Errno 32] Broken pipe"

2014-12-22 Thread Fei Long Wang
** Changed in: glance
   Importance: Undecided => Medium

** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1342080

Title:
  glance api is tracebacking with "error: [Errno 32] Broken pipe"

Status in Python client library for Glance:
  In Progress

Bug description:
  127.0.0.1 - - [15/Jul/2014 10:55:39] code 400, message Bad request syntax 
('0')
  127.0.0.1 - - [15/Jul/2014 10:55:39] "0" 400 -
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/eventlet/greenpool.py", line 80, in 
_spawn_n_impl
  func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 584, in 
process_request
  proto.__init__(socket, address, self)
File "/usr/lib/python2.7/SocketServer.py", line 649, in __init__
  self.handle()
File "/usr/lib/python2.7/BaseHTTPServer.py", line 342, in handle
  self.handle_one_request()
File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 247, in 
handle_one_request
  if not self.parse_request():
File "/usr/lib/python2.7/BaseHTTPServer.py", line 286, in parse_request
  self.send_error(400, "Bad request syntax (%r)" % requestline)
File "/usr/lib/python2.7/BaseHTTPServer.py", line 368, in send_error
  self.send_response(code, message)
File "/usr/lib/python2.7/BaseHTTPServer.py", line 395, in send_response
  self.send_header('Server', self.version_string())
File "/usr/lib/python2.7/BaseHTTPServer.py", line 401, in send_header
  self.wfile.write("%s: %s\r\n" % (keyword, value))
File "/usr/lib/python2.7/socket.py", line 324, in write
  self.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
  self._sock.sendall(view[write_offset:write_offset+buffer_size])
File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 307, in 
sendall
  tail = self.send(data, flags)
File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 293, in 
send
  total_sent += fd.send(data[total_sent:], flags)
  error: [Errno 32] Broken pipe

  
  
http://logs.openstack.org/62/100162/3/check/check-tempest-dsvm-full/77badd4/logs/screen-g-api.txt.gz?level=INFO#_2014-07-15_10_55_39_729

  
  Seen all over the gate. Seeing stacktraces like thi

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1342080/+subscriptions



[Yahoo-eng-team] [Bug 1403136] Re: Create tenants, users, and roles in OpenStack Installation Guide for Ubuntu 14.04  - juno

2014-12-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/143519
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=549be4ba1d84ba749ea79c7a0d1e8953ef9d4cfd
Submitter: Jenkins
Branch:master

commit 549be4ba1d84ba749ea79c7a0d1e8953ef9d4cfd
Author: Matthew Kassawara 
Date:   Mon Dec 22 13:33:13 2014 -0600

Fix additional issue with _member_ role creation

I removed the '--tenant' option from the admin user/tenant
creation step because the latter needs only the admin role.
Also, I provided an explanation about automatic assignment
and/or creation of the _member_ role.

Change-Id: I036ae43b73c8ca469e04e8090e197d57a7a5f5d0
Closes-Bug: #1403136
backport: juno


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1403136

Title:
  Create tenants, users, and roles in OpenStack Installation Guide for
  Ubuntu 14.04  - juno

Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Manuals:
  Fix Released

Bug description:
  "e. By default, the dashboard limits access to users with the _member_
  role. Create the _member_ role:"

  The first sentence is true, but keystone will automatically create the
  _member_ role if it does not exist.

  I discovered this while tracking down an error:  "keystone user-
  create" resulted in a "duplicate entry" error. The sequence is like
  this:

  1) As described in the doc, I run "keystone role-create --name _member_". The 
role is created and assigned a random ID.
  2) On "user-create", keystone wants to assign the _member_ role to the new 
user. It looks up member_role_id in keystone.conf, finds none (the 
member_role_id does not match the ID from step 1)
  3) keystone now tries to create the _member_ role, but this fails since the 
name already exists.

  So by not creating the "_member_" role myself, the problem is averted.
  That's why I'm opening a bug against docs; another fix would be for
  keystone to do the lookup by name instead, but I assume the keystone
  team has a good reason for not doing so.

  I'm using the v2 API with SQL backend.
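
  For context, a sketch of the keystone.conf options involved (the ID
  shown is keystone's shipped default, included for illustration):

      [DEFAULT]
      # Role keystone auto-assigns during v2 user creation. If no role
      # with this ID exists, keystone creates one named
      # member_role_name, which collides with a manually created
      # _member_ role that has the same name but a different ID.
      member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab
      member_role_name = _member_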

  ---
  Built: 2014-12-09T01:28:32+00:00
  git SHA: 6d3c276487be990722bc423642ffb05217d77289
  URL: 
http://docs.openstack.org/juno/install-guide/install/apt/content/keystone-users.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/install-guide/section_keystone-users.xml
  xml:id: keystone-users

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1403136/+subscriptions



[Yahoo-eng-team] [Bug 1403136] Re: Create tenants, users, and roles in OpenStack Installation Guide for Ubuntu 14.04  - juno

2014-12-22 Thread Matt Kassawara
After further discussion with Dolph, I'm reopening this bug to address a
minor issue with the patch. The installation guide should only specify a
tenant (--tenant) during creation of the 'demo' user.

** Changed in: openstack-manuals
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1403136

Title:
  Create tenants, users, and roles in OpenStack Installation Guide for
  Ubuntu 14.04  - juno

Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Manuals:
  In Progress

Bug description:
  "e. By default, the dashboard limits access to users with the _member_
  role. Create the _member_ role:"

  The first sentence is true, but keystone will automatically create the
  _member_ role if it does not exist.

  I discovered this while tracking down an error:  "keystone user-
  create" resulted in a "duplicate entry" error. The sequence is like
  this:

  1) As described in the doc, I run "keystone role-create --name _member_". The 
role is created and assigned a random ID.
  2) On "user-create", keystone wants to assign the _member_ role to the new 
user. It looks up member_role_id in keystone.conf, finds none (the 
member_role_id does not match the ID from step 1)
  3) keystone now tries to create the _member_ role, but this fails since the 
name already exists.

  So by not creating the "_member_" role myself, the problem is averted.
  That's why I'm opening a bug against docs; another fix would be for
  keystone to do the lookup by name instead, but I assume the keystone
  team has a good reason for not doing so.

  I'm using the v2 API with SQL backend.

  ---
  Built: 2014-12-09T01:28:32+00:00
  git SHA: 6d3c276487be990722bc423642ffb05217d77289
  URL: 
http://docs.openstack.org/juno/install-guide/install/apt/content/keystone-users.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/install-guide/section_keystone-users.xml
  xml:id: keystone-users

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1403136/+subscriptions



[Yahoo-eng-team] [Bug 1404962] [NEW] openvswitch mech. driver does not report error in check_segment_for_agent

2014-12-22 Thread George Shuklin
Public bug reported:

When an administrator misspells the mappings for external flat networks,
nova fails with an obscure traceback during instance creation:

 Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2231, 
in _build_resources
 yield resources
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2101, 
in _build_and_run_instance
 block_device_info=block_device_info)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2619, in spawn
 write_to_disk=True)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4150, in _get_guest_xml
 context)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3936, in _get_guest_config
 flavor, CONF.libvirt.virt_type)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 352, 
in get_config
 _("Unexpected vif_type=%s") % vif_type)
 NovaException: Unexpected vif_type=binding_failed


The real problem lies in neutron/plugins/ml2/drivers/mech_openvswitch.py:

    network_type = segment[api.NETWORK_TYPE]
    if network_type == 'local':
        return True
    elif network_type in tunnel_types:
        return True
    elif network_type in ['flat', 'vlan']:
        return segment[api.PHYSICAL_NETWORK] in mappings
    else:
        return False

If network_type is 'flat' and segment[api.PHYSICAL_NETWORK] is not in
mappings, it returns False; this is what causes all the other problems.

Proposal: add some kind of WARNING here to let the administrator know
that no matching mappings were found.
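
A minimal sketch of the proposed warning, assuming the driver's existing
LOG object (the message wording is illustrative):

    elif network_type in ['flat', 'vlan']:
        if segment[api.PHYSICAL_NETWORK] not in mappings:
            LOG.warning("Physical network %(net)s is not in this "
                        "agent's mappings %(maps)s; binding will fail.",
                        {'net': segment[api.PHYSICAL_NETWORK],
                         'maps': mappings})
            return False
        return True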

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404962

Title:
  openvswitch mech. driver does not report error in
  check_segment_for_agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When an administrator misspells the mappings for external flat
  networks, nova fails with an obscure traceback during instance
  creation:

   Traceback (most recent call last):
 File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
2231, in _build_resources
   yield resources
 File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
2101, in _build_and_run_instance
   block_device_info=block_device_info)
 File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2619, in spawn
   write_to_disk=True)
 File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4150, in _get_guest_xml
   context)
 File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3936, in _get_guest_config
   flavor, CONF.libvirt.virt_type)
 File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 
352, in get_config
   _("Unexpected vif_type=%s") % vif_type)
   NovaException: Unexpected vif_type=binding_failed

  
  The real problem lies in neutron/plugins/ml2/drivers/mech_openvswitch.py:

      network_type = segment[api.NETWORK_TYPE]
      if network_type == 'local':
          return True
      elif network_type in tunnel_types:
          return True
      elif network_type in ['flat', 'vlan']:
          return segment[api.PHYSICAL_NETWORK] in mappings
      else:
          return False

  If network_type is 'flat' and segment[api.PHYSICAL_NETWORK] is not in
  mappings, it returns False; this is what causes all the other problems.

  Proposal: add some kind of WARNING here to let the administrator know
  that no matching mappings were found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404962/+subscriptions



[Yahoo-eng-team] [Bug 1404945] [NEW] Default gateway can vanish from HA routers, destroying external connectivity for all VMs on that network

2014-12-22 Thread Assaf Muller
Public bug reported:

The default gateway can vanish from the HA router namespace after
certain operations.

My setup:
Fedora 20
keepalived-1.2.13-1.fc20.x86_64
Network manager turned off.

I can reproduce this reliably on my system, but cannot reproduce this on
a RHEL 7 system. Even on that system, the issue manifests on its own, I
just can't reproduce it at will.

How I reproduce on my system:
Create an HA router
Set it as a gateway
Go to the master instance
Observe that the namespace has a default gateway
Add an internal interface (Make sure that the IP is 'lower' than the IP of the 
external interface, this is explained below)
Default gateway will no longer exist

Cause:
keepalived.conf has two sections for VIPs: virtual_ipaddress, and 
virtual_ipaddress_excluded. The difference is that any VIPs that go in the 
first section will be propagated on the wire, and any VIPs in the excluded 
section do not. Traditional configuration of keepalived places one VIP in the 
normal section, henceforth known as the 'primary VIP', and all other VIPs in 
the excluded section. Currently the keepalived manager does this by sorting the 
VIPs (Internal IPs, external SNAT IP, and all floating IPs), placing the lowest 
one (By string comparison) as the primary, and the rest of the VIPs in the 
excluded section: 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/keepalived.py#L155

That code is run, and keepalived.conf is rebuilt whenever a router is
updated. This means that the primary VIP can change on router updates.
As it turns out, after a conversation with a keepalived developer,
keepalived assumes that the order does not change (This is possibly a
keepalived bug, depending on your view on life, the ordering of the
stars when keepalived is executed and the wind speed in the Falkland
Islands in the past leap year). On my system, with the currently
installed keepalived version, whenever the primary VIP changes, the
default gateway (Present in the virtual_routes section of
keepalived.conf) is violently removed.

Possible solution:
Make sure that the primary VIP never changes. For example: Fabricate an IP per 
HA router cluster (Derived from the VRID?), add it as a VIP on the HA device, 
configure it as the primary VIP. I played around with a hacky variation of this 
solution and I could no longer reproduce the issue.
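
A minimal standalone sketch of why the primary VIP moves (illustration
only, not the neutron code):

    # VIPs are ordered by plain string comparison; index 0 becomes the
    # primary VIP, the rest go to virtual_ipaddress_excluded.
    vips = sorted(['192.168.1.1/24'])                 # external IP only
    assert vips[0] == '192.168.1.1/24'                # primary VIP

    vips = sorted(['192.168.1.1/24', '10.0.0.1/24'])  # add internal IP
    assert vips[0] == '10.0.0.1/24'                   # primary changed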

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: juno-backport-potential l3-ha

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404945

Title:
  Default gateway can vanish from HA routers, destroying external
  connectivity for all VMs on that network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The default gateway can vanish from the HA router namespace after
  certain operations.

  My setup:
  Fedora 20
  keepalived-1.2.13-1.fc20.x86_64
  Network manager turned off.

  I can reproduce this reliably on my system, but cannot reproduce this
  on a RHEL 7 system. Even on that system, the issue manifests on its
  own, I just can't reproduce it at will.

  How I reproduce on my system:
  Create an HA router
  Set it as a gateway
  Go to the master instance
  Observe that the namespace has a default gateway
  Add an internal interface (Make sure that the IP is 'lower' than the IP of 
the external interface, this is explained below)
  Default gateway will no longer exist

  Cause:
  keepalived.conf has two sections for VIPs: virtual_ipaddress, and 
virtual_ipaddress_excluded. The difference is that any VIPs that go in the 
first section will be propagated on the wire, and any VIPs in the excluded 
section do not. Traditional configuration of keepalived places one VIP in the 
normal section, henceforth known as the 'primary VIP', and all other VIPs in 
the excluded section. Currently the keepalived manager does this by sorting the 
VIPs (Internal IPs, external SNAT IP, and all floating IPs), placing the lowest 
one (By string comparison) as the primary, and the rest of the VIPs in the 
excluded section: 
  
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/keepalived.py#L155

  That code is run, and keepalived.conf is rebuilt whenever a router is
  updated. This means that the primary VIP can change on router updates.
  As it turns out, after a conversation with a keepalived developer,
  keepalived assumes that the order does not change (This is possibly a
  keepalived bug, depending on your view on life, the ordering of the
  stars when keepalived is executed and the wind speed in the Falkland
  Islands in the past leap year). On my system, with the currently
  installed keepalived version, whenever the primary VIP changes, the
  default gateway (Present in the virtual_routes section of
  keepalived.conf) is violently removed.

[Yahoo-eng-team] [Bug 1404943] [NEW] 'Error: Invalid service catalog service: volume' if no volume service is defined

2014-12-22 Thread George Shuklin
Public bug reported:

If an OpenStack installation has no cinder service in the endpoint list,
horizon reports 'Error: Invalid service catalog service: volume' many
times (after login, and each time the new-instance dialog is opened).

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1404943

Title:
  'Error: Invalid service catalog service: volume' if no volume service
  is defined

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If an OpenStack installation has no cinder service in the endpoint list,
  horizon reports 'Error: Invalid service catalog service: volume' many
  times (after login, and each time the new-instance dialog is opened).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1404943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404937] [NEW] IpAddressGenerationFailure No more IP addresses

2014-12-22 Thread Bellantuono Daniel
Public bug reported:

Hi Guys,
When a user creates a wrong network x.x.x.x/32 or a subnet is full, neutron
generates this error:

2014-12-22 06:31:48.970 6318 ERROR neutron.agent.dhcp_agent [-] Unable to 
enable dhcp for 67b05c84-7c53-4d11-89ad-afb0204f62d3.
2014-12-22 06:31:48.970 6318 TRACE neutron.agent.dhcp_agent RemoteError: Remote 
error: IpAddressGenerationFailure No more IP addresses available on network 
67b05c84-7c53-4d11-89ad-afb0204f62d3.

When this happens neutron restarts all dnsmasq services and all VMs fail
to renew their DHCP leases:

Dec 22 16:37:25 mariadb2 dhclient: DHCPREQUEST of 10.29.81.115 on eth0 to 
10.29.81.3 port 67 (xid=0x1997b37a)
Dec 22 16:37:25 mariadb2 dhclient: DHCPNAK from 10.29.81.3 (xid=0x1997b37a)
Dec 22 16:37:25 mariadb2 dhclient: DHCPDISCOVER on eth0 to 255.255.255.255 port 
67 interval 3 (xid=0x62a70cd4)
Dec 22 16:37:25 mariadb2 dhclient: DHCPREQUEST of 10.29.81.115 on eth0 to 
255.255.255.255 port 67 (xid=0x62a70cd4)
Dec 22 16:37:25 mariadb2 dhclient: DHCPOFFER of 10.29.81.115 from 10.29.81.3
Dec 22 16:37:25 mariadb2 dhclient: DHCPACK of 10.29.81.115 from 10.29.81.3
Dec 22 16:37:25 mariadb2 dhclient: bound to 10.29.81.115 -- renewal in 50 
seconds.

In an environment with heavy traffic this problem creates packet loss and
the dnsmasq services are overworked.
Do you have any idea how to resolve this?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404937

Title:
  IpAddressGenerationFailure No more IP addresses

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi Guys,
  When a user creates a wrong network x.x.x.x/32 or a subnet is full,
  neutron generates this error:

  2014-12-22 06:31:48.970 6318 ERROR neutron.agent.dhcp_agent [-] Unable to 
enable dhcp for 67b05c84-7c53-4d11-89ad-afb0204f62d3.
  2014-12-22 06:31:48.970 6318 TRACE neutron.agent.dhcp_agent RemoteError: 
Remote error: IpAddressGenerationFailure No more IP addresses available on 
network 67b05c84-7c53-4d11-89ad-afb0204f62d3.

  When this happens neutron restarts all dnsmasq services and all VMs
  fail to renew their DHCP leases:

  Dec 22 16:37:25 mariadb2 dhclient: DHCPREQUEST of 10.29.81.115 on eth0 to 
10.29.81.3 port 67 (xid=0x1997b37a)
  Dec 22 16:37:25 mariadb2 dhclient: DHCPNAK from 10.29.81.3 (xid=0x1997b37a)
  Dec 22 16:37:25 mariadb2 dhclient: DHCPDISCOVER on eth0 to 255.255.255.255 
port 67 interval 3 (xid=0x62a70cd4)
  Dec 22 16:37:25 mariadb2 dhclient: DHCPREQUEST of 10.29.81.115 on eth0 to 
255.255.255.255 port 67 (xid=0x62a70cd4)
  Dec 22 16:37:25 mariadb2 dhclient: DHCPOFFER of 10.29.81.115 from 10.29.81.3
  Dec 22 16:37:25 mariadb2 dhclient: DHCPACK of 10.29.81.115 from 10.29.81.3
  Dec 22 16:37:25 mariadb2 dhclient: bound to 10.29.81.115 -- renewal in 50 
seconds.

  In an environment with heavy traffic this problem creates packet loss
  and the dnsmasq services are overworked.
  Do you have any idea how to resolve this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404933] Re: 0.9.2 breaks glance migration 006

2014-12-22 Thread Dan Prince
** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1404933

Title:
  0.9.2 breaks glance migration 006

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Database schema migration for SQLAlchemy:
  In Progress

Bug description:
  Just tried a Delorean package built against 0.9.2. It looks like
  93ae21007d0100332a5751fc58f7616ced775ef9 (SqlScript: execute multiple
  statements one by one) breaks a very old glance 006 migration.

  2014-12-22 15:56:08.059 0 ERROR migrate.versioning.script.sql [-] SQL 
script 
/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_upgrade.sql
 failed: (OperationalError) (1065, 'Query was empty') '\n' ()
  2014-12-22 15:56:08.060 0 CRITICAL glance [-] OperationalError: 
(OperationalError) (1065, 'Query was empty') '\n' ()
  2014-12-22 15:56:08.060 0 TRACE glance Traceback (most recent call last):
  2014-12-22 15:56:08.060 0 TRACE glance   File "/usr/bin/glance-manage", 
line 10, in 
  2014-12-22 15:56:08.060 0 TRACE glance sys.exit(main())
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 281, in main
  2014-12-22 15:56:08.060 0 TRACE glance return CONF.command.action_fn()
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 157, in sync
  2014-12-22 15:56:08.060 0 TRACE glance CONF.command.current_version)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 115, in sync
  2014-12-22 15:56:08.060 0 TRACE glance version)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/oslo/db/sqlalchemy/migration.py", line 79, in 
db_sync
  2014-12-22 15:56:08.060 0 TRACE glance return 
versioning_api.upgrade(engine, repository, version)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 186, in 
upgrade
  2014-12-22 15:56:08.060 0 TRACE glance return _migrate(url, 
repository, version, upgrade=True, err=err, **opts)
  2014-12-22 15:56:08.060 0 TRACE glance   File "", line 2, in 
_migrate
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/util/__init__.py", line 
160, in with_engine
  2014-12-22 15:56:08.060 0 TRACE glance return f(*a, **kw)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 366, in 
_migrate
  2014-12-22 15:56:08.060 0 TRACE glance schema.runchange(ver, change, 
changeset.step)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/schema.py", line 93, in 
runchange
  2014-12-22 15:56:08.060 0 TRACE glance change.run(self.engine, step)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/script/sql.py", line 45, 
in run
  2014-12-22 15:56:08.060 0 TRACE glance conn.execute(statement)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 721, in 
execute
  2014-12-22 15:56:08.060 0 TRACE glance return 
self._execute_text(object, multiparams, params)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 870, in 
_execute_text
  2014-12-22 15:56:08.060 0 TRACE glance statement, parameters
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in 
_execute_context
  2014-12-22 15:56:08.060 0 TRACE glance context)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1156, in 
_handle_dbapi_exception
  2014-12-22 15:56:08.060 0 TRACE glance 
util.raise_from_cause(newraise, exc_info)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in 
raise_from_cause
  2014-12-22 15:56:08.060 0 TRACE glance reraise(type(exception), 
exception, tb=exc_tb)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in 
_execute_context
  2014-12-22 15:56:08.060 0 TRACE glance context)
  2014-12-22 15:56:08.060 0 TRACE glance   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in 
do_execute
  2014-12-22 15:56:08.060 0 TRACE glance cursor.execute(statement, 
parameters)
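
  The failure mode suggests that statement-by-statement execution feeds
  MySQL a blank fragment (the trailing '\n'). A minimal sketch of the kind
  of guard that avoids it (an assumption about the fix, not the actual
  sqlalchemy-migrate patch):

      # Split the .sql script and skip blank fragments before executing,
      # otherwise MySQL raises 1065 'Query was empty'.
      for statement in script_text.split(';'):
          if statement.strip():
              conn.execute(statement)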

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/

[Yahoo-eng-team] [Bug 1404935] [NEW] ovs_lib unnecessarily loops through all Interfaces/Ports when calling 'ovs-vsctl list'

2014-12-22 Thread Terry Wilson
Public bug reported:

The ovs-vsctl 'list' command can take a list of records as an argument,
so there is no need to manually loop through all records discarding the
ones with names that don't match the bridge's port name list.
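
For illustration, the narrowing described above could look like this (a
sketch; the port names and the use of subprocess are assumptions, not the
actual ovs_lib patch):

    import subprocess

    port_names = ['tap0', 'tap1']  # hypothetical ports on the bridge
    # 'ovs-vsctl list Interface [RECORD...]' returns only the named records,
    # so no Python-side filtering of the full Interface table is needed.
    out = subprocess.check_output(
        ['ovs-vsctl', '--format=json', 'list', 'Interface'] + port_names)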

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404935

Title:
  ovs_lib unnecessarily loops through all Interfaces/Ports when calling
  'ovs-vsctl list'

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The ovs-vsctl 'list' command can take a list of records as an
  argument, so there is no need to manually loop through all records
  discarding the ones with names that don't match the bridge's port name
  list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404927] [NEW] ML2 Cisco Nexus MD: Incorrect mock return value

2014-12-22 Thread Robert Pothier
Public bug reported:


In the UT for the ML2 Cisco Nexus MD, in the function
test_ncclient_version_detect(), the value being passed into the mock is
incorrect for mocking the ncclient connect object.
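
For illustration, mocking the connect factory with unittest.mock usually
looks like the following (a hedged sketch; the exact attribute inspected
by the version detection is not shown here and would be an assumption):

    from unittest import mock  # the original tests used the external mock lib

    with mock.patch('ncclient.manager.connect') as m_connect:
        # The *return value* of connect() must stand in for the connection
        # object that the driver inspects, not the patched function itself.
        m_connect.return_value = mock.Mock()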

** Affects: neutron
 Importance: Undecided
 Assignee: Robert Pothier (rpothier)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Robert Pothier (rpothier)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404927

Title:
  ML2 Cisco Nexus MD: Incorrect mock return value

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  In the UT for the ML2 Cisco Nexus MD, in the function
  test_ncclient_version_detect(), the value being passed into the mock is
  incorrect for mocking the ncclient connect object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401110] Re: Volume Block Device Mapping cannot be found

2014-12-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/140873
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=ab667960ef337538cf777bb0f325cb5d0e865d76
Submitter: Jenkins
Branch:master

commit ab667960ef337538cf777bb0f325cb5d0e865d76
Author: Mitsuhiro Tanino 
Date:   Wed Dec 10 15:52:08 2014 -0500

Actually attach a volume to an instance before taking snapshot

In the test test_snapshot_create_with_volume_in_use, the test calls Cinder
"os-attach" for attaching a volume. The "os-attach" call tells Cinder the
volume is attached, but the API doesn't actually attach the volume to an
instance (it only updates the volume status in the DB).

This is not the right test case for taking a snapshot of an in-use volume.
In this test, Nova "os-volume_attachment" should be called for the volume
attachment.

Also, some Cinder drivers fail the assisted snapshot due to this problem.
In order to perform the snapshot properly, this fix is needed.

Closes-Bug #1401110
Change-Id: Ib31e351fe7c3d27824241cf142c213eae287483f
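
A sketch of the corrected flow (tempest-style; the method names are
assumptions in the style of the era, not the exact change above):

    def _attach_via_nova(self, server_id, volume_id):
        # Nova actually plugs the volume into the instance...
        self.servers_client.attach_volume(server_id, volume_id,
                                          device='/dev/vdb')
        # ...and we wait until Cinder reports it in-use, so the assisted
        # snapshot runs against a really-attached volume.
        self.volumes_client.wait_for_volume_status(volume_id, 'in-use')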


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401110

Title:
   Volume Block Device Mapping cannot be found

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Fix Released

Bug description:
  In Short:
  Block Device Mapping cannot be found for Tempest online snapshot test on 
devstack with remote file system driver.

  More Detailed:
  Testing Environment is plain kilo head devstack with the included Tempest 
testsuite and a new remotefs based cinder driver.
  The test:
   
tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshot_create_with_volume_in_use
  produces an error in Nova, stacktrace:

  2014-12-10 11:14:42.329 DEBUG nova.api.openstack.wsgi [req-5a0e0e46-2455-41ee-a36a-6c14de0d1627 VolumesV1SnapshotTestJSON-2078362258 VolumesV1SnapshotTestJSON-1658640019] Calling method '>' from (pid=12485) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:963
  2014-12-10 11:14:42.374 INFO nova.osapi_compute.wsgi.server [req-5a0e0e46-2455-41ee-a36a-6c14de0d1627 VolumesV1SnapshotTestJSON-2078362258 VolumesV1SnapshotTestJSON-1658640019] 192.168.122.67 "GET /v2/e152b27ee9d641e6a57bec75ab29c924/servers/4fb9afd5-18a9-416e-b955-d161ebbc436f HTTP/1.1" status: 200 len: 1609 time: 0.0979590
  2014-12-10 11:14:43.381 DEBUG nova.api.openstack.wsgi [req-02efe9e9-c7cb-46bf-8ab4-0b70a9b84f6d VolumesV1SnapshotTestJSON-2078362258 VolumesV1SnapshotTestJSON-1658640019] Calling method '>' from (pid=12487) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:963
  2014-12-10 11:14:43.423 INFO nova.osapi_compute.wsgi.server [req-02efe9e9-c7cb-46bf-8ab4-0b70a9b84f6d VolumesV1SnapshotTestJSON-2078362258 VolumesV1SnapshotTestJSON-1658640019] 192.168.122.67 "GET /v2/e152b27ee9d641e6a57bec75ab29c924/servers/4fb9afd5-18a9-416e-b955-d161ebbc436f HTTP/1.1" status: 200 len: 1749 time: 0.0428371
  2014-12-10 11:14:43.995 DEBUG nova.api.openstack.wsgi [req-023897db-cfd9-4259-b7c4-3621722a5eec VolumesV2SnapshotTestJSON-1786381329 VolumesV2SnapshotTestJSON-2048796393] Action: 'create', calling method: >, body: {"snapshot": {"create_info": {"snapshot_id": "cf916b1a-e797-41e3-bde8-39c9615aa622", "type": "qcow2", "new_file": "volume-9f7ec09d-4ff2-4ef9-96dc-2bdc07471a65.cf916b1a-e797-41e3-bde8-39c9615aa622"}, "volume_id": "9f7ec09d-4ff2-4ef9-96dc-2bdc07471a65"}} from (pid=12487) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:960
  2014-12-10 11:14:43.995 AUDIT nova.api.openstack.compute.contrib.assisted_volume_snapshots [req-023897db-cfd9-4259-b7c4-3621722a5eec VolumesV2SnapshotTestJSON-1786381329 VolumesV2SnapshotTestJSON-2048796393] Create assisted snapshot from volume 9f7ec09d-4ff2-4ef9-96dc-2bdc07471a65
  2014-12-10 11:14:44.007 ERROR nova.api.openstack [req-023897db-cfd9-4259-b7c4-3621722a5eec VolumesV2SnapshotTestJSON-1786381329 VolumesV2SnapshotTestJSON-2048796393] Caught error: No volume Block Device Mapping with id 9f7ec09d-4ff2-4ef9-96dc-2bdc07471a65.
  2014-12-10 11:14:44.007 TRACE nova.api.openstack Traceback (most recent call last):
  2014-12-10 11:14:44.007 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/__init__.py", line 125, in __call__

[Yahoo-eng-team] [Bug 1394885] Re: Improve logging of OVS agent in order to log messaging exceptions

2014-12-22 Thread Eugene Nikanorov
It looks like the amount of logging is appropriate.
Marking as Invalid for now. Will reopen if needed.

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394885

Title:
  Improve logging of OVS agent in order to log messaging exceptions

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  In some cases like when OVS agent fails to fetch data from the plugin
  via rpc, it raises DeviceListRetrievalError while swallowing original
  exception which might be helpful to troubleshoot the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404888] [NEW] Metadata widget doesn't handle default values for numeric and boolean fields

2014-12-22 Thread Kamil Rykowski
Public bug reported:

The metadata widget doesn't handle default values well for metadefinition
properties from Glance whose type is set to integer, number or boolean.
For now it works only with string properties.

To reproduce this issue follow these steps:
1. Go to the "metadef_properties" table located in Glance database.
2. Update "json_schema" field of any record (e.g. "os_shutdown_timeout") and 
put there "default" key with some integer value (e.g. 10).

Field "json_schema" before update:

{
"minimum": 0,
"type": "integer",
"description": "Some description.",
"title": "Shutdown timeout"
}

After update:
{
"minimum": 0,
"type": "integer",
"description": "Some description.",
"title": "Shutdown timeout",
"default: 10
}

3. Go to the Horizon /admin/images/ page and choose any image which doesn't 
have "os_shutdown_timeout" property defined yet.
4. Open the "Update Metadata" form and pick the "Shutdown timeout" property. As
you can see, it has an empty input, with no default value filled in.

The same steps can be repeated for number and boolean properties to check
that no default value is shown to the user. A default value is pre-filled
only for string properties.

** Affects: horizon
 Importance: Undecided
 Assignee: Kamil Rykowski (kamil-rykowski)
 Status: In Progress


** Tags: metadef

** Changed in: horizon
 Assignee: (unassigned) => Kamil Rykowski (kamil-rykowski)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1404888

Title:
  Metadata widget doesn't handle default values for numeric and boolean
  fields

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The metadata widget doesn't handle default values well for metadefinition
  properties from Glance whose type is set to integer, number or boolean.
  For now it works only with string properties.

  To reproduce this issue follow these steps:
  1. Go to the "metadef_properties" table located in Glance database.
  2. Update "json_schema" field of any record (e.g. "os_shutdown_timeout") and 
put there "default" key with some integer value (e.g. 10).

  Field "json_schema" before update:

  {
  "minimum": 0,
  "type": "integer",
  "description": "Some description.",
  "title": "Shutdown timeout"
  }

  After update:
  {
  "minimum": 0,
  "type": "integer",
  "description": "Some description.",
  "title": "Shutdown timeout",
  "default: 10
  }

  3. Go to the Horizon /admin/images/ page and choose any image which doesn't 
have "os_shutdown_timeout" property defined yet.
  4. Open the "Update Metadata" form and pick the "Shutdown timeout" property.
  As you can see, it has an empty input, with no default value filled in.

  The same steps can be repeated for number and boolean properties to check
  that no default value is shown to the user. A default value is pre-filled
  only for string properties.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1404888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403424] Re: Fixed ip info shown for port even when dhcp is disabled

2014-12-22 Thread Pasquale Porreca
Maybe the solution I proposed is not the best one, but I still feel this is 
something that should be handled somewhere. 
This is an issue that I noticed in Horizon even before I started to look at
any OpenStack code; in fact I was just using it as a user.
As a user I assume that the IP address shown in the dashboard next to the
instance name is actually assigned to that instance and is not something
used internally by OpenStack.


** Tags removed: ui
** Tags added: horizon

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403424

Title:
  Fixed ip info shown for port even when dhcp is disabled

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  As a user it is very confusing (especially in Horizon dashboard)
  having the IP address displayed even when the DHCP is disabled for the
  subnet the port belongs to: the user would expect that the IP address
  shown is actually assigned to the instance, but this is not the case,
  since the DHCP is disabled.

  I asked in the ML about this issue
  (http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html)
  and I understood that neutron needs to reserve an IP address for a port
  even when it is not assigned; still, I think this info should not be
  displayed, or should be labeled differently.

  At first I thought about raising the bug against Horizon, but I feel
  the correct place to fix this is in neutron. Before assigning this bug
  to myself I would like to get some feedback from other developers; my
  idea for a possible solution is to add a boolean element "assigned" to
  the "fixed_ip" dict, with value False when DHCP is disabled in the
  subnet identified by "subnet_id".
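
  A sketch of what the proposed payload could look like (the "assigned"
  key is the reporter's proposal, not an existing field; the IDs are
  placeholders):

      "fixed_ips": [{"subnet_id": "<subnet-uuid>",
                     "ip_address": "10.0.0.5",
                     "assigned": false}]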

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1403424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379761] Re: Asset compression does not happen unless debug mode is enabled

2014-12-22 Thread Julie Pichon
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1379761

Title:
  Asset compression does not happen unless debug mode is enabled

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in horizon package in Ubuntu:
  Fix Released
Status in python-django-pyscss package in Ubuntu:
  Invalid

Bug description:
  Juno/rc1 of OpenStack on utopic; the dashboard is unthemed and the
  compressed assets are missing unless DEBUG = True in local settings,
  at which point things look much better.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.10
  Package: openstack-dashboard 1:2014.2~rc1-0ubuntu3 [modified: 
usr/share/openstack-dashboard/openstack_dashboard/enabled/_40_router.py]
  ProcVersionSignature: User Name 3.16.0-20.27-generic 3.16.3
  Uname: Linux 3.16.0-20-generic x86_64
  ApportVersion: 2.14.7-0ubuntu5
  Architecture: amd64
  Date: Fri Oct 10 11:26:21 2014
  Ec2AMI: ami-00af
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  PackageArchitecture: all
  SourcePackage: horizon
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.apache2.conf.available.openstack.dashboard.conf: 
2014-10-10T11:25:49.335633
  mtime.conffile..etc.openstack.dashboard.local.settings.py: 
2014-10-10T11:25:49.307619

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1379761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404867] [NEW] Volume remains in-use status, if instance booted from volume deleted when it is in the error state

2014-12-22 Thread Abhishek Kekane
Public bug reported:

If an instance is booted from a volume and goes into the error state for
some reason, the volume from which the instance was booted remains in the
in-use state even after the instance is deleted.
IMO, the volume should be detached so that it can be used to boot another
instance.

Steps to reproduce:

1. Log in to Horizon, create a new volume.
2. Create an Instance using newly created volume.
3. Verify instance is in active state.
$ source devstack/openrc demo demo
$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ACTIVE | -          | Running     | private=10.0.0.3 |
+--------------------------------------+------+--------+------------+-------------+------------------+

Note:
Use the shelve/unshelve API to see the instance go into the error state:
unshelving a volume-backed instance does not work and sets the instance
state to error (ref: https://bugs.launchpad.net/nova/+bug/1404801)

4. Shelve the instance
$ nova shelve 

5. Verify the status is SHELVED_OFFLOADED.
$ nova list
+--------------------------------------+------+-------------------+------------+-------------+------------------+
| ID                                   | Name | Status            | Task State | Power State | Networks         |
+--------------------------------------+------+-------------------+------------+-------------+------------------+
| dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | SHELVED_OFFLOADED | -          | Shutdown    | private=10.0.0.3 |
+--------------------------------------+------+-------------------+------------+-------------+------------------+

6. Unshelve the instance.
$ nova unshelve 

7. Verify the instance is in the Error state.
$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | Error  | unshelving | Spawning    | private=10.0.0.3 |
+--------------------------------------+------+--------+------------+-------------+------------------+

8. Delete the instance using Horizon.

9. Verify that the volume is still in the in-use state.
$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 4aeefd25-10aa-42c2-9a2d-1c89a95b4d4f | in-use | test | 1    | lvmdriver-1 | true     | 8f7bdc24-1891-4bbb-8f0c-732b9cbecae7 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

10. In Horizon, the volume's "Attached To" information is displayed as
"Attached to None on /dev/vda".

11. The user is not able to delete this volume or attach it to another
instance, as it is still in use.

** Affects: nova
 Importance: Undecided
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New


** Tags: ntt

** Description changed:

  If the instance is booted from volume and goes in to error state due to some 
reason.
  Volume from which instance is booted, remains in-use state even the instance 
is deleted.
- IMO, volume should be detached so that it can be used to boot other instance. 
+ IMO, volume should be detached so that it can be used to boot other instance.
  
  Steps to reproduce:
  
  1. Log in to Horizon, create a new volume.
  2. Create an Instance using newly created volume.
  3. Verify instance is in active state.
  $ source devstack/openrc demo demo
  $ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ACTIVE | -  | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+
  
  Note:
  Use shelve-unshelve api to see the instance goes into error state.
- unshelving volume back instance does not work and sets instance state to 
error (ref: https://bugs.launchpad.net/nova/+bug/1404801)
+ unshelving volumed back instance does not work and sets

[Yahoo-eng-team] [Bug 1391694] Re: Warning message about missing policy.d folder during Sahara start

2014-12-22 Thread Thierry Carrez
** Changed in: oslo-incubator
   Status: Fix Committed => Fix Released

** Changed in: oslo-incubator
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391694

Title:
  Warning message about missing policy.d folder during Sahara start

Status in OpenStack Compute (Nova):
  In Progress
Status in The Oslo library incubator:
  Fix Released
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Confirmed

Bug description:
  2014-11-11 16:14:05.786 403 WARNING sahara.openstack.common.policy [-]
  Can not find policy directories policy.d

  Example: https://sahara.mirantis.com/logs/31/133131/2/check/gate-
  sahara-integration-vanilla-1/9ca6d41/console.html

  Policy library from oslo searches for policy in directories specified
  by 'policy_dirs' parameter and warns if directory doesn't exist.
  Default value is ['policy.d'].

  Need to check what other projects do about this. I have never seen
  such warnings in other openstack projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403958] Re: api_extensions_path duplicates cause problems

2014-12-22 Thread Eugene Nikanorov
IMO that is a configuration problem and it doesn't make much sense to fix
it.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403958

Title:
  api_extensions_path duplicates cause problems

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  When creating api extensions for neutron you need to add the path to
  the extensions to the api_extensions_path param in
  /etc/neutron/neutron.conf.

  The __path__ of neutron.extensions is appended to that, so if a user
  is not careful and adds the __path__ of neutron.extensions to the list
  himself, all the extensions in that path will be imported twice.

  When some extensions are loaded twice errors will occur. For example,
  when the L3 extension is loaded twice, super(L3, self) will crash with
  the following error: TypeError: super(type, obj): obj must be an
  instance or subtype of type. That happens because id(L3) changes when
  the 'l3.py' file is imported the second time and super checks if
  isinstance(self, L3), which will return False.

  To reproduce this bug set api_extensions_path to the __path__ of
  neutron.extensions and restart neutron-server. Check
  /var/log/neutron/server.log for the trace.
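
  For illustration, a configuration like the following triggers the double
  import (the path is an assumption; it is wherever the neutron.extensions
  package resolves on the system):

      [DEFAULT]
      api_extensions_path = /usr/lib/python2.7/site-packages/neutron/extensions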

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233707] Re: neutron http policy check broken

2014-12-22 Thread Thierry Carrez
** Changed in: oslo-incubator
   Status: Fix Committed => Fix Released

** Changed in: oslo-incubator
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1233707

Title:
  neutron http policy check broken

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in The Oslo library incubator:
  Fix Released

Bug description:
  Neutron in theory should support HttpCheck as a policy element:

  
https://github.com/openstack/neutron/blob/master/neutron/openstack/common/policy.py#L747

  So I ran a little http server on localhost and added this line to the 
policy.json file:
  "create_network": "http://127.0.0.1:8080/ or rule:default",

  It turns out the http post never made it to the http server.
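
  For reference, a minimal listener along these lines (an assumption, not
  the reporter's exact script) is enough to see whether the POST arrives;
  as far as the incubator code goes, HttpCheck counts a response body of
  "True" as a pass:

      from http.server import BaseHTTPRequestHandler, HTTPServer

      class PolicyEcho(BaseHTTPRequestHandler):
          def do_POST(self):
              length = int(self.headers.get('Content-Length', 0))
              print(self.rfile.read(length))  # dump the serialized payload
              self.send_response(200)
              self.end_headers()
              self.wfile.write(b'True')       # "True" => check passes

      HTTPServer(('127.0.0.1', 8080), PolicyEcho).serve_forever()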

  Here, the code is trying to populate a JSON string with the variable target:
  
https://github.com/openstack/neutron/blob/master/neutron/openstack/common/policy.py#L757

  And in execution, we have:

  2013-10-01 14:22:32.092 ERROR neutron.openstack.common.policy [-] 
target={'router:external': , u'name': u'net1', 
'provider:physical_network': , 
u'admin_state_up': True, 'tenant_id': u'881d9a4a7c4a486b94fae690e6d613fb', 
'provider:network_type': , 'shared': False, 
'provider:segmentation_id': }
  creds={'user_id': u'0495af214c2c4bdd99fadb7a7c69630e', 'roles': [u'admin'], 
'tenant_id': u'881d9a4a7c4a486b94fae690e6d613fb', 'is_admin': True, 
'timestamp': '2013-10-01 14:22:32.079282', 'project_id': 
u'881d9a4a7c4a486b94fae690e6d613fb', 'read_deleted': 'no'}
  url=http://127.0.0.1:8080/{'router:external': , u'name': u'net1', 'provider:physical_network': , u'admin_state_up': True, 'tenant_id': 
u'881d9a4a7c4a486b94fae690e6d613fb', 'provider:network_type': , 'shared': False, 'provider:segmentation_id': }
  2013-10-01 14:22:32.092 TRACE neutron.openstack.common.policy Traceback (most 
recent call last):
  2013-10-01 14:22:32.092 TRACE neutron.openstack.common.policy   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 52, in __call__
  2013-10-01 14:22:32.092 TRACE neutron.openstack.common.policy qs = 
environ['QUERY_STRING']
  2013-10-01 14:22:32.092 TRACE neutron.openstack.common.policy KeyError: 
'QUERY_STRING'
  2013-10-01 14:22:32.092 TRACE neutron.openstack.common.policy
  2013-10-01 14:22:32.092 ERROR neutron.api.v2.resource [-] create failed
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 357, in create
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource 
item[self._resource])
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/policy.py", line 379, in enforce
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource 
exc=exceptions.PolicyNotAuthorized, action=action)
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/openstack/common/policy.py", line 169, in check
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource result = 
rule(target, creds)
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/openstack/common/policy.py", line 732, in __call__
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource return 
_rules[self.match](target, creds)
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/openstack/common/policy.py", line 366, in __call__
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource if rule(target, 
cred):
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/openstack/common/policy.py", line 758, in __call__
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource data = {'target': 
jsonutils.dumps(target),
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/openstack/common/jsonutils.py", line 151, in dumps
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource return 
json.dumps(value, default=default, **kwargs)
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/json/__init__.py", line 238, in dumps
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource **kw).encode(obj)
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/json/encoder.py", line 200, in encode
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource chunks = 
self.iterencode(o, _one_shot=True)
  2013-10-01 14:22:32.092 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/json/encoder.py", line 263, in iterencode
  2013-10

[Yahoo-eng-team] [Bug 1371701] Re: delete middleware module files

2014-12-22 Thread Thierry Carrez
** Changed in: oslo-incubator
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371701

Title:
  delete middleware module files

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  remove files part of oslo.middleware graduation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1371701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404848] [NEW] tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern failing on Jenkins

2014-12-22 Thread Fawad Khaliq
Public bug reported:

tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
failing on Jenkins.

Here's the traceback:

2014-12-22 10:19:41.732 | 
2014-12-22 10:19:41.732 | Traceback (most recent call last):
2014-12-22 10:19:41.732 |   File "tempest/test.py", line 112, in wrapper
2014-12-22 10:19:41.732 | return f(self, *func_args, **func_kwargs)
2014-12-22 10:19:41.732 |   File 
"tempest/scenario/test_snapshot_pattern.py", line 69, in test_snapshot_pattern
2014-12-22 10:19:41.732 | server = 
self._boot_image(CONF.compute.image_ref)
2014-12-22 10:19:41.732 |   File 
"tempest/scenario/test_snapshot_pattern.py", line 45, in _boot_image
2014-12-22 10:19:41.733 | return self.create_server(image=image_id, 
create_kwargs=create_kwargs)
2014-12-22 10:19:41.733 |   File "tempest/scenario/manager.py", line 209, 
in create_server
2014-12-22 10:19:41.733 | status='ACTIVE')
2014-12-22 10:19:41.733 |   File 
"tempest/services/compute/json/servers_client.py", line 183, in 
wait_for_server_status
2014-12-22 10:19:41.733 | ready_wait=ready_wait)
2014-12-22 10:19:41.733 |   File "tempest/common/waiters.py", line 66, in 
wait_for_server_status
2014-12-22 10:19:41.733 | resp, body = client.get_server(server_id)
2014-12-22 10:19:41.733 |   File 
"tempest/services/compute/json/servers_client.py", line 142, in get_server
2014-12-22 10:19:41.733 | resp, body = self.get("servers/%s" % 
str(server_id))
2014-12-22 10:19:41.733 |   File "tempest/common/rest_client.py", line 239, 
in get
2014-12-22 10:19:41.733 | return self.request('GET', url, 
extra_headers, headers)
2014-12-22 10:19:41.734 |   File "tempest/common/rest_client.py", line 450, 
in request
2014-12-22 10:19:41.734 | resp, resp_body)
2014-12-22 10:19:41.734 |   File "tempest/common/rest_client.py", line 547, 
in _error_checker
2014-12-22 10:19:41.734 | raise exceptions.ServerFault(message)
2014-12-22 10:19:41.734 | ServerFault: Got server fault
2014-12-22 10:19:41.734 | Details: The server has either erred or is 
incapable of performing the requested operation.
201

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404848

Title:
  
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
  failing on Jenkins

Status in OpenStack Compute (Nova):
  New

Bug description:
  
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
  failing on Jenkins.

  Here's the traceback:

  2014-12-22 10:19:41.732 | 
  2014-12-22 10:19:41.732 | Traceback (most recent call last):
  2014-12-22 10:19:41.732 |   File "tempest/test.py", line 112, in wrapper
  2014-12-22 10:19:41.732 | return f(self, *func_args, **func_kwargs)
  2014-12-22 10:19:41.732 |   File 
"tempest/scenario/test_snapshot_pattern.py", line 69, in test_snapshot_pattern
  2014-12-22 10:19:41.732 | server = 
self._boot_image(CONF.compute.image_ref)
  2014-12-22 10:19:41.732 |   File 
"tempest/scenario/test_snapshot_pattern.py", line 45, in _boot_image
  2014-12-22 10:19:41.733 | return self.create_server(image=image_id, 
create_kwargs=create_kwargs)
  2014-12-22 10:19:41.733 |   File "tempest/scenario/manager.py", line 209, 
in create_server
  2014-12-22 10:19:41.733 | status='ACTIVE')
  2014-12-22 10:19:41.733 |   File 
"tempest/services/compute/json/servers_client.py", line 183, in 
wait_for_server_status
  2014-12-22 10:19:41.733 | ready_wait=ready_wait)
  2014-12-22 10:19:41.733 |   File "tempest/common/waiters.py", line 66, in 
wait_for_server_status
  2014-12-22 10:19:41.733 | resp, body = client.get_server(server_id)
  2014-12-22 10:19:41.733 |   File 
"tempest/services/compute/json/servers_client.py", line 142, in get_server
  2014-12-22 10:19:41.733 | resp, body = self.get("servers/%s" % 
str(server_id))
  2014-12-22 10:19:41.733 |   File "tempest/common/rest_client.py", line 
239, in get
  2014-12-22 10:19:41.733 | return self.request('GET', url, 
extra_headers, headers)
  2014-12-22 10:19:41.734 |   File "tempest/common/rest_client.py", line 
450, in request
  2014-12-22 10:19:41.734 | resp, resp_body)
  2014-12-22 10:19:41.734 |   File "tempest/common/rest_client.py", line 
547, in _error_checker
  2014-12-22 10:19:41.734 | raise exceptions.ServerFault(message)
  2014-12-22 10:19:41.734 | ServerFault: Got server fault
  2014-12-22 10:19:41.734 | Details: The server has either erred or is 
incapable of performing the requested operation.
  201

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404848/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1384235] Re: Nova raises exception about existing libvirt filter

2014-12-22 Thread Eugene Nikanorov
The Neutron firewall plugin has nothing to do with the filters under discussion.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384235

Title:
  Nova raises exception about existing libvirt filter

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in OpenStack Compute (Nova):
  New

Bug description:
  Sometimes, when I start an instance, nova raises an exception that a
  filter like nova-instance-instance-000b-52540039740a already exists.

  So I have to execute `virsh nwfilter-undefine` and try to boot the
  instance again:

  In libvirt logs I can see the following:

  2014-10-22 12:20:13.816+: 4693: error : virNWFilterObjAssignDef:3068 : 
operation failed: filter 'nova-instance-instance-000b-52540039740a' already 
exists with uuid af47118d-1934-4ca7-8a71-c6ae9a6499aa
  2014-10-22 12:20:13.930+: 4688: error : virNetSocketReadWire:1523 : End 
of file while reading data: Input/output error

  I use libvirt 1.2.8-3 ( Debian )

  I have the following services defined:

  service_plugins =
  
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

  
  I use Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404839] [NEW] NUMA topology from image meta data is bugged

2014-12-22 Thread sahid
Public bug reported:

The way we retrieve NUMA properties from image metadata is through the
method 'numa_get_constraints'; this method expects the 'properties' dict
from image_meta.

In parts of the code we can see this method called with the whole image
metadata object and not the properties dict.

To fix it, we should always pass the whole image object.
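
A schematic of the mismatch (a runnable sketch; the helper body and the
dict contents are hypothetical stand-ins for the real code):

    def numa_get_constraints(flavor, image_props):
        # The real helper expects the image *properties* dict here.
        return image_props.get('hw_numa_nodes')

    image_meta = {'name': 'cirros', 'properties': {'hw_numa_nodes': '2'}}

    numa_get_constraints({}, image_meta['properties'])  # returns '2'
    numa_get_constraints({}, image_meta)                # returns None -- the bug
    # The report proposes standardizing on passing the whole image object.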

** Affects: nova
 Importance: High
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404839

Title:
  NUMA topology from image meta data is bugged

Status in OpenStack Compute (Nova):
  New

Bug description:
  The way we retrieve NUMA properties from image metadata is through the
  method 'numa_get_constraints'; this method expects the 'properties' dict
  from image_meta.

  In parts of the code we can see this method called with the whole image
  metadata object and not the properties dict.

  To fix it, we should always pass the whole image object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404823] [NEW] router-interface-add port succeed but does not add corresponding flows

2014-12-22 Thread Xurong Yang
Public bug reported:

neutron router-interface-add with a port succeeds but does not add the
corresponding flows; the operations are as follows:
dvr-controller:/etc/init.d # neutron port-create db8431b9-4e9e-48a1-96c7-deeca52217de --name port-test
Created a new port:
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:host_id       |                                                                                      |
| binding:profile       | {}                                                                                   |
| binding:vif_details   | {}                                                                                   |
| binding:vif_type      | unbound                                                                              |
| binding:vnic_type     | normal                                                                               |
| device_id             |                                                                                      |
| device_owner          |                                                                                      |
| fixed_ips             | {"subnet_id": "089b9033-1de1-486c-a4ca-b2b1d0e979d9", "ip_address": "172.16.20.87"} |
| id                    | 59f2e155-5076-4ac3-a4e0-2cf8161c0f80                                                 |
| mac_address           | fa:16:3e:23:95:bf                                                                    |
| name                  | port-test                                                                            |
| network_id            | db8431b9-4e9e-48a1-96c7-deeca52217de                                                 |
| security_groups       | dadc55f6-f2a8-42ea-b263-5c1e9ca8782f                                                 |
| status                | DOWN                                                                                 |
| tenant_id             | db1921917d8543b1ba7ff9b1f1df6081                                                     |
+-----------------------+--------------------------------------------------------------------------------------+
dvr-controller:/etc/init.d # neutron router-interface-add dvr port=59f2e155-5076-4ac3-a4e0-2cf8161c0f80
Added interface 59f2e155-5076-4ac3-a4e0-2cf8161c0f80 to router dvr.
 
Error log:
2014-12-22 03:50:55.302 9595 ERROR neutron.db.dvr_mac_db 
[req-7932fee8-651c-4496-82c3-dec846e23e5d None] Could not retrieve gateway port 
for subnet {'name': u'vxlan-subnet2', 'enable_dhcp': True, 'network_id': 
u'db8431b9-4e9e-48a1-96c7-deeca52217de', 'tenant_id': 
u'db1921917d8543b1ba7ff9b1f1df6081', 'dns_nameservers': [], 'gateway_ip': 
u'172.16.20.1', 'ipv6_ra_mode': None, 'allocation_pools': [{'start': 
u'172.16.20.2', 'end': u'172.16.20.250'}], 'host_routes': [], 'shared': False, 
'ip_version': 4L, 'ipv6_address_mode': None, 'cidr': u'172.16.20.0/24', 'id': 
u'089b9033-1de1-486c-a4ca-b2b1d0e979d9'}
 
We can see that if we add a port to the router as a gateway interface, it
cannot retrieve the gateway port because the lookup filters by the gateway
IP.

** Affects: neutron
 Importance: Undecided
 Assignee: Xurong Yang (idopra)
 Status: New


** Tags: dvr

** Changed in: neutron
 Assignee: (unassigned) => Xurong Yang (idopra)

** Tags added: dvr

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404823

Title:
  router-interface-add port succeed but does not add corresponding flows

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron router-interface-add with a port succeeds but does not add the
  corresponding flows; the operations are as follows:
  dvr-controller:/etc/init.d # neutron port-create 
db8431b9-4e9e-48a1-96c7-deeca52217de --name port-test
  Created a new port:
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | binding:host_id   | 
 

[Yahoo-eng-team] [Bug 1404817] Re: On a /16 range floating ip is assigned /32 - cannot connect

2014-12-22 Thread Eduard Biceri-Matei
The issue was with the security group, not with addressing.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404817

Title:
  On a /16 range floating ip is assigned /32 - cannot connect

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Devstack 2015.1, single box.

  Trying to get floating ips to work across 2 /24 blocks  (on a /16 subnet)
  Localrc:

  HOST_IP=10.100.130.8 # public ip of host, 10.100.0.0/16 subnet
  FLAT_INTERFACE=eth0
  FIXED_RANGE=10.140.129.0/24 # private 
  FIXED_NETWORK_SIZE=255
  FLOATING_RANGE=10.100.129.0/16 #publicly reachable network for vms, also in 
10.100.0.0/16, but only 10.100.129.X block
  MULTI_HOST=0
  LOGFILE=/opt/stack/logs/stack.sh.log
  ADMIN_PASSWORD=*
  MYSQL_PASSWORD=*
  RABBIT_PASSWORD=*
  SERVICE_PASSWORD=*
  SERVICE_TOKEN=*

  Creating a guest :
  assigned ips private=10.140.129.3, 10.100.129.2

  On the host itself (ip a l)
  inet 10.100.129.2/32 scope global br100
 valid_lft forever preferred_lft forever

  
  Address is not accessible from the outside (because it's a /32, expected a 
/16)

  The initial pool was created with the range 10.100.0.0 - 10.100.255.255
  and then started assigning IPs from 10.100.0.1, conflicting with the
  router; the IPs were also assigned /32.

  I deleted the pool and created a new one: 10.100.129.0 - 10.100.129.255,
  but the IPs are still /32.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404820] [NEW] Typo: glance exception additional dot

2014-12-22 Thread liaonanhai
Public bug reported:

When I tried to fix the glanceclient HTML output, I found the result has an
additional dot before the word quota, as follows:
Denying attempt to upload image because it exceeds the .quota: The size of the 
data 41126400 will exceed the limit. 11210240 bytes remaining.

** Affects: glance
 Importance: Undecided
 Assignee: liaonanhai (nanhai-liao)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => liaonanhai (nanhai-liao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1404820

Title:
  Typo: glance exception additional dot

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When I tried to fix the glanceclient HTML output, I found the result has
  an additional dot before the word quota, as follows:
  Denying attempt to upload image because it exceeds the .quota: The size of 
the data 41126400 will exceed the limit. 11210240 bytes remaining.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1404820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404819] [NEW] wrong self link of endpoint_groups API

2014-12-22 Thread wanghong
Public bug reported:

OS-EP-FILTER prefix is missing in the url:

curl -i -H "X-Auth-Token:$token" -H "Content-Type:application/json"
http://127.0.0.1:35357/v3/OS-EP-FILTER/endpoint_groups -d
'{"endpoint_group":{"description":"endpoint group
description","filters":
{"interface":"admin"},"name":"endpoint_group_name"}}'

{"endpoint_group": {"description": "endpoint group description",
"links": {"self":
"http://127.0.0.1:35357/v3/endpoint_groups/bb8eee6863aa4b6099e5c5b6139fd53d"},
"id": "bb8eee6863aa4b6099e5c5b6139fd53d", "filters": {"interface":
"admin"}, "name": "endpoint_group_name"}}

** Affects: keystone
 Importance: Undecided
 Assignee: wanghong (w-wanghong)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => wanghong (w-wanghong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1404819

Title:
  wrong self link of endpoint_groups API

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The OS-EP-FILTER prefix is missing from the self-link URL:

  curl -i -H "X-Auth-Token:$token" -H "Content-Type:application/json" \
    http://127.0.0.1:35357/v3/OS-EP-FILTER/endpoint_groups \
    -d '{"endpoint_group": {"description": "endpoint group description", "filters": {"interface": "admin"}, "name": "endpoint_group_name"}}'

  {"endpoint_group": {"description": "endpoint group description",
   "links": {"self": "http://127.0.0.1:35357/v3/endpoint_groups/bb8eee6863aa4b6099e5c5b6139fd53d"},
   "id": "bb8eee6863aa4b6099e5c5b6139fd53d",
   "filters": {"interface": "admin"},
   "name": "endpoint_group_name"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1404819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404817] [NEW] On a /16 range floating ip is assigned /32 - cannot connect

2014-12-22 Thread Eduard Biceri-Matei
Public bug reported:

Devstack 2015.1, single box.

Trying to get floating IPs to work across two /24 blocks (on a /16 subnet)
Localrc:

HOST_IP=10.100.130.8 # public ip of host, 10.100.0.0/16 subnet
FLAT_INTERFACE=eth0
FIXED_RANGE=10.140.129.0/24 # private 
FIXED_NETWORK_SIZE=255
FLOATING_RANGE=10.100.129.0/16 # publicly reachable network for VMs, also in 10.100.0.0/16, but only the 10.100.129.X block
MULTI_HOST=0
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=*
MYSQL_PASSWORD=*
RABBIT_PASSWORD=*
SERVICE_PASSWORD=*
SERVICE_TOKEN=*
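
Worth noting: 10.100.129.0/16 is not on a /16 network boundary (the
network address of that block is 10.100.0.0/16), which would explain why
the pool covered 10.100.0.0 - 10.100.255.255 and allocation started at
10.100.0.1. Assuming the intent is to hand out only the 10.100.129.X
block, the range would be:

FLOATING_RANGE=10.100.129.0/24 # only the 10.100.129.X block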

Creating a guest:
assigned IPs: private=10.140.129.3, 10.100.129.2

On the host itself (ip a l)
inet 10.100.129.2/32 scope global br100
   valid_lft forever preferred_lft forever


Address is not accessible from the outside (because it's a /32, expected a /16)

The initial pool was created with the range 10.100.0.0 - 10.100.255.255;
it then started assigning IPs from 10.100.0.1, conflicting with the
router, and the IPs were also assigned as /32.

I deleted the pool and created a new one (10.100.129.0 - 10.100.129.255),
but the IPs are still /32.
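
For reference, nova-network normally binds floating IPs to the public
bridge as /32 and relies on NAT rather than subnet routing for
reachability, so the /32 by itself may not be the failure. A few
commands to narrow down where it breaks (interface and address taken
from the report above; run on the devstack host):

  # Confirm the floating IP is bound (a /32 here is normal for nova-network):
  ip addr show br100 | grep 10.100.129.2

  # Check that the NAT rules for the floating IP were installed:
  sudo iptables -t nat -S | grep 10.100.129.2

  # From another host on 10.100.0.0/16, check that ARP for the floating
  # IP is answered (it should resolve to the devstack host's MAC):
  arping -I eth0 10.100.129.2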

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404817

Title:
  On a /16 range floating ip is assigned /32 - cannot connect

Status in OpenStack Compute (Nova):
  New

Bug description:
  Devstack 2015.1, single box.

  Trying to get floating IPs to work across two /24 blocks (on a /16 subnet)
  Localrc:

  HOST_IP=10.100.130.8 # public ip of host, 10.100.0.0/16 subnet
  FLAT_INTERFACE=eth0
  FIXED_RANGE=10.140.129.0/24 # private 
  FIXED_NETWORK_SIZE=255
  FLOATING_RANGE=10.100.129.0/16 # publicly reachable network for VMs, also in 10.100.0.0/16, but only the 10.100.129.X block
  MULTI_HOST=0
  LOGFILE=/opt/stack/logs/stack.sh.log
  ADMIN_PASSWORD=*
  MYSQL_PASSWORD=*
  RABBIT_PASSWORD=*
  SERVICE_PASSWORD=*
  SERVICE_TOKEN=*

  Creating a guest:
  assigned IPs: private=10.140.129.3, 10.100.129.2

  On the host itself (ip a l)
  inet 10.100.129.2/32 scope global br100
 valid_lft forever preferred_lft forever

  
  Address is not accessible from the outside (because it's a /32, expected a /16)

  The initial pool was created with the range 10.100.0.0 - 10.100.255.255;
  it then started assigning IPs from 10.100.0.1, conflicting with the
  router, and the IPs were also assigned as /32.

  I deleted the pool and created a new one (10.100.129.0 -
  10.100.129.255), but the IPs are still /32.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp