[Yahoo-eng-team] [Bug 1294980] [NEW] neutron.tests.unit.services.loadbalancer.test_agent_scheduler.LBaaSAgentSchedulerTestCaseXML.test_report_states fails

2014-03-19 Thread Isaku Yamahata
Public bug reported:

http://logs.openstack.org/18/76418/4/check/gate-neutron-python26/40305f8/testr_results.html.gz

ft1.8524: 
neutron.tests.unit.services.loadbalancer.test_agent_scheduler.LBaaSAgentSchedulerTestCaseXML.test_report_states_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-03-20 03:47:28,830 INFO [neutron.manager] Loading core plugin: 
neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
2014-03-20 03:47:28,938 INFO 
[neutron.plugins.openvswitch.ovs_neutron_plugin] Network VLAN ranges: {}
2014-03-20 03:47:28,941 INFO [neutron.manager] Service L3_ROUTER_NAT is 
supported by the core plugin
2014-03-20 03:47:28,941 INFO [neutron.manager] Loading Plugin: 
neutron.services.loadbalancer.plugin.LoadBalancerPlugin
2014-03-20 03:47:29,030 INFO [neutron.api.extensions] Initializing 
extension manager.
2014-03-20 03:47:29,030 ERROR [neutron.api.extensions] Extension path 
'unit/extensions' doesn't exist!
2014-03-20 03:47:29,030 INFO [neutron.api.extensions] Loading extension 
file: __init__.py
2014-03-20 03:47:29,030 INFO [neutron.api.extensions] Loading extension 
file: __init__.pyc
2014-03-20 03:47:29,031 INFO [neutron.api.extensions] Loading extension 
file: agent.py
2014-03-20 03:47:29,031 INFO [neutron.api.extensions] Loaded extension: 
agent
2014-03-20 03:47:29,032 INFO [neutron.api.extensions] Loading extension 
file: agent.pyc
2014-03-20 03:47:29,032 INFO [neutron.api.extensions] Loading extension 
file: allowedaddresspairs.py
2014-03-20 03:47:29,033 INFO [neutron.api.extensions] Loaded extension: 
allowed-address-pairs
2014-03-20 03:47:29,033 INFO [neutron.api.extensions] Loading extension 
file: allowedaddresspairs.pyc
2014-03-20 03:47:29,033 INFO [neutron.api.extensions] Loading extension 
file: dhcpagentscheduler.py
2014-03-20 03:47:29,034 INFO [neutron.api.extensions] Loaded extension: 
dhcp_agent_scheduler
2014-03-20 03:47:29,034 INFO [neutron.api.extensions] Loading extension 
file: dhcpagentscheduler.pyc
2014-03-20 03:47:29,034 INFO [neutron.api.extensions] Loading extension 
file: external_net.py
2014-03-20 03:47:29,035 INFO [neutron.api.extensions] Loaded extension: 
external-net
2014-03-20 03:47:29,035 INFO [neutron.api.extensions] Loading extension 
file: external_net.pyc
2014-03-20 03:47:29,035 INFO [neutron.api.extensions] Loading extension 
file: extra_dhcp_opt.py
2014-03-20 03:47:29,036 INFO [neutron.api.extensions] Loaded extension: 
extra_dhcp_opt
2014-03-20 03:47:29,036 INFO [neutron.api.extensions] Loading extension 
file: extra_dhcp_opt.pyc
2014-03-20 03:47:29,036 INFO [neutron.api.extensions] Loading extension 
file: extraroute.py
2014-03-20 03:47:29,037 INFO [neutron.api.extensions] Loaded extension: 
extraroute
2014-03-20 03:47:29,037 INFO [neutron.api.extensions] Loading extension 
file: extraroute.pyc
2014-03-20 03:47:29,037 INFO [neutron.api.extensions] Loading extension 
file: firewall.py
2014-03-20 03:47:29,039  WARNING [neutron.api.extensions] Extension fwaas not 
supported by any of loaded plugins
2014-03-20 03:47:29,039 INFO [neutron.api.extensions] Loading extension 
file: firewall.pyc
2014-03-20 03:47:29,039 INFO [neutron.api.extensions] Loading extension 
file: flavor.py
2014-03-20 03:47:29,040  WARNING [neutron.api.extensions] Extension flavor not 
supported by any of loaded plugins
2014-03-20 03:47:29,040 INFO [neutron.api.extensions] Loading extension 
file: flavor.pyc
2014-03-20 03:47:29,040 INFO [neutron.api.extensions] Loading extension 
file: l3.py
2014-03-20 03:47:29,041 INFO [neutron.api.extensions] Loaded extension: 
router
2014-03-20 03:47:29,041 INFO [neutron.api.extensions] Loading extension 
file: l3.pyc
2014-03-20 03:47:29,042 INFO [neutron.api.extensions] Loading extension 
file: l3_ext_gw_mode.py
2014-03-20 03:47:29,042 INFO [neutron.api.extensions] Loaded extension: 
ext-gw-mode
2014-03-20 03:47:29,042 INFO [neutron.api.extensions] Loading extension 
file: l3_ext_gw_mode.pyc
2014-03-20 03:47:29,043 INFO [neutron.api.extensions] Loading extension 
file: l3agentscheduler.py
2014-03-20 03:47:29,043 INFO [neutron.api.extensions] Loaded extension: 
l3_agent_scheduler
2014-03-20 03:47:29,044 INFO [neutron.api.extensions] Loading extension 
file: l3agentscheduler.pyc
2014-03-20 03:47:29,044 INFO [neutron.api.extensions] Loading extension 
file: lbaas_agentscheduler.py
2014-03-20 03:47:29,045 INFO [neutron.api.extensions] Loaded extension: 
lbaas_agent_scheduler
2014-03-20 03:47:29,045 INFO [neutron.api.extensions] Loading extension 
file: lbaas_agentscheduler.pyc
2014-03-20 03:47:29,045 INFO [neutron.api.extensions] Loading extension 
file: loadbalancer.py
2014-03-20 03:47:29,047 INFO [neutron.api.extensions] Loaded extension: 
lbaas
2014-03-20 03:47:29,047 INFO [neutron.api.extensions] Loading extension 
file: loadbal

[Yahoo-eng-team] [Bug 1294974] [NEW] Can't easily stop a transaction from yielding

2014-03-19 Thread Kevin Benton
Public bug reported:

Many of the current plugins (e.g. ML2) operate under the assumption that
the greenthread will not cooperatively yield during a transaction. When
this assumption is broken, there is a risk of a mysql/eventlet
deadlock.
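
A minimal sketch of one way to obtain that guarantee, assuming an
eventlet-based service and a SQLAlchemy session (the helper is ours, not
ML2's): serialize transaction bodies behind a semaphore, so that even if
a greenthread yields mid-transaction, no competing transaction can start
in the same process.

    import contextlib

    from eventlet import semaphore

    _TX_LOCK = semaphore.Semaphore()

    @contextlib.contextmanager
    def serialized_transaction(session):
        # Holding the semaphore means a cooperative yield inside the
        # transaction cannot hand control to another greenthread that
        # would contend for the same database locks.
        with _TX_LOCK:
            with session.begin(subtransactions=True):
                yield session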

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294974

Title:
  Can't easily stop a transaction from yielding

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Many of the current plugins (e.g. ML2) operate under the assumption
  that the greenthread will not cooperatively yield during a
  transaction. When this assumption is broken, there is a risk of a
  mysql/eventlet deadlock.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294973] [NEW] Better error message is required when glance is dead while capturing

2014-03-19 Thread Manjunath
Public bug reported:

1. Bring down glance:
Stopping openstack-glance-api:       [  OK  ]
Stopping openstack-glance-registry:  [  OK  ]
[root@nimbuswrkl665 images]#
2. Invoke the createImage operation for the server:

manjunath@manjunath-ThinkPad-T420:~/Desktop$ curl -k -i -X POST \
  -H "X-Auth-Token: 6928aacccede46d7921c57fb827cd9be" \
  -H "Content-Type: application/json" \
  -d '{"createImage": {"name": "vm1_failed", "metadata": {}}}' \
  https://nimbuswrkl665.rtp.stglabs.ibm.com/powervc/openstack/compute/v2/dd73d8ab4ea7498e8d103206e37c65b5/servers/93476dc9-c70e-4b73-af3e-1309fecdaf95/action

Response:

HTTP/1.1 500 Internal Server Error
Date: Mon, 10 Mar 2014 14:34:27 GMT
Content-Type: application/json; charset=UTF-8
X-Compute-Request-Id: req-75d8138c-1d32-4bb7-888f-7aa48bdd2011
Cache-control: no-cache
Pragma: no-cache
Content-Length: 128
Connection: close

{"computeFault": {"message": "The server has either erred or is
incapable of performing the requested operation.", "code":
500}}manjunath@manjunath-Thimanjunath@manjunath-ThinkPad-T420:˜/Desktop$


The message needs to be improved: a clear error appears in api.log
(excerpt below, after the sketch), but it is not passed through to the
REST response.
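
A hedged sketch of the handling being asked for (the handler and call
names are assumptions, not nova's actual code): catch the glance
communication failure at the API boundary and surface its message
instead of a generic 500.

    import webob.exc

    from nova import exception

    # Hypothetical body of the createImage server-action handler.
    def _action_create_image(self, req, instance_id, body):
        context = req.environ['nova.context']
        instance = self.compute_api.get(context, instance_id)
        try:
            self.compute_api.snapshot(context, instance,
                                      body['createImage']['name'])
        except exception.GlanceConnectionFailed as e:
            # Return a 503 with the clear underlying message rather
            # than a bare computeFault 500.
            raise webob.exc.HTTPServiceUnavailable(
                explanation=e.format_message())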

2014-03-10 10:34:27.130 4713 ERROR nova.image.glance 
[req-75d8138c-1d32-4bb7-888f-7aa48bdd2011 0 dd73d8ab4ea7498e8d103206e37c65b5] 
Error contacting glance server '9.37.74.232:9292' for 'get', retrying.
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance Traceback (most recent 
call last):
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance   File 
"/usr/lib/python2.6/site-packages/nova/image/glance.py", line 211, in call
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance return 
getattr(client.images, method)(*args, **kwargs)
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance   File 
"/usr/lib/python2.6/site-packages/glanceclient/v1/images.py", line 114, in get
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance % 
urllib.quote(str(image_id)))
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance   File 
"/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 289, in 
raw_request
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance return 
self._http_request(url, method, **kwargs)
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance   File 
"/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 235, in 
_http_request
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance raise 
exc.CommunicationError(message=message)
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance CommunicationError: Error 
communicating with http://9.37.74.232:9292 [Errno 111] ECONNREFUSED
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance
2014-03-10 10:34:27.240 4713 WARNING nova.compute.utils 
[req-75d8138c-1d32-4bb7-888f-7aa48bdd2011 0 dd73d8ab4ea7498e8d103206e37c65b5] 
[instance: 93476dc9-c70e-4b73-af3e-1309fecdaf95] NV-6BF9597 Can't access image 
36f6d470-3a3f-4113-8724-b018280c8f27: NV-BAD1189 Connection to glance host 
9.37.74.232:9292 failed: Error communicating with http://9.37.74.232:9292 
[Errno 111] ECONNREFUSED
2014-03-10 10:34:27.244 4713 ERROR nova.image.glance 
[req-75d8138c-1d32-4bb7-888f-7aa48bdd2011 0 dd73d8ab4ea7498e8d103206e37c65b5] 
Error contacting glance server '9.37.74.232:9292' for 'create', retrying.
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance Traceback (most recent 
call last):
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance   File 
"/usr/lib/python2.6/site-packages/nova/image/glance.py", line 211, in call
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance return 
getattr(client.images, method)(*args, **kwargs)
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance   File 
"/usr/lib/python2.6/site-packages/glanceclient/v1/images.py", line 253, in 
create
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance 'POST', '/v1/images', 
headers=hdrs, body=image_data)
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance   File 
"/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 289, in 
raw_request
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance return 
self._http_request(url, method, **kwargs)
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance   File 
"/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 235, in 
_http_request
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance raise 
exc.CommunicationError(message=message)
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance CommunicationError: Error 
communicating with http://9.37.74.232:9292 [Errno 111] ECONNREFUSED
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance
2014-03-10 10:34:27.255 4713 ERROR nova.api.openstack 
[req-75d8138c-1d32-4bb7-888f-7aa48bdd2011 0 dd73d8ab4ea7498e8d103206e37c65b5] 
NV-A68A08C Caught error: NV-BAD1189 Connection to glance host 9.37.74.232:9292 
failed: Error communicating with http://9.37.74.232:9292 [Errno 111] 
ECONNREFUSED
2014-03-10 10:34:27.255 4713 TRACE nova.api.openstack Traceback (most recent 
call last):
201

[Yahoo-eng-team] [Bug 1293784] Re: Need better default for nova_url

2014-03-19 Thread Srinivasa T N
Thanks, James, for pointing this out. Since nova_url in /etc/neutron.conf
overrides the default value in config.py, I think it has not caused any
problem until now, right?

Should I make the change and proceed with the bug?

Regards,
Seenu.

** Changed in: neutron
   Status: New => Opinion

** Changed in: neutron
 Assignee: (unassigned) => Srinivasa T N (seenutn)

** Changed in: neutron
   Status: Opinion => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293784

Title:
  Need better default for nova_url

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  While looking into https://bugs.launchpad.net/tripleo/+bug/1293782, we
  noticed that it seems the default for nova_url in neutron.conf is
  http://127.0.0.1:8774

  From common/config.py:
  cfg.StrOpt('nova_url',
             default='http://127.0.0.1:8774',
             help=_('URL for connection to nova')),

  Is this really a sane default? Wouldn't http://127.0.0.1:8774/v2 be
  more correct?
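
  The change being suggested would be a one-line default update, e.g.:

  cfg.StrOpt('nova_url',
             default='http://127.0.0.1:8774/v2',
             help=_('URL for connection to nova')),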

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293784/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294971] [NEW] gate-grenade-dsvm-partial-ncpu Failure in upgrade-keystone

2014-03-19 Thread Attila Fazekas
Public bug reported:

http://logs.openstack.org/03/81603/2/gate/gate-grenade-dsvm-partial-ncpu/4165949/console.html#_2014-03-20_01_34_10_033
console.log:
2014-03-20 01:34:10.033 | [ERROR] /opt/stack/new/devstack/lib/keystone:439 
keystone did not start
2014-03-20 01:34:11.034 | + die 249 'Failure in upgrade-keystone'
2014-03-20 01:34:11.035 | + local exitcode=1
2014-03-20 01:34:11.035 | + set +o xtrace
2014-03-20 01:34:11.035 | [Call Trace]
2014-03-20 01:34:11.035 | ./grenade.sh:249:die
2014-03-20 01:34:11.073 | [ERROR] ./grenade.sh:249 Failure in upgrade-keystone

screen-key.log
http://logs.openstack.org/03/81603/2/gate/gate-grenade-dsvm-partial-ncpu/4165949/logs/new/screen-key.txt.gz#_2014-03-20_01_32_10_651
2014-03-20 01:32:10.651 30797 CRITICAL keystone [-] NameError: global name '_' 
is not defined
2014-03-20 01:32:10.651 30797 TRACE keystone Traceback (most recent call last):
2014-03-20 01:32:10.651 30797 TRACE keystone   File 
"/opt/stack/new/keystone/bin/keystone-all", line 146, in 
2014-03-20 01:32:10.651 30797 TRACE keystone serve(*servers)
2014-03-20 01:32:10.651 30797 TRACE keystone   File 
"/opt/stack/new/keystone/bin/keystone-all", line 80, in serve
2014-03-20 01:32:10.651 30797 TRACE keystone logging.exception(_('Failed to 
start the %(name)s server') % {
2014-03-20 01:32:10.651 30797 TRACE keystone NameError: global name '_' is not 
defined
2014-03-20 01:32:10.651 30797 TRACE keystone 
key failed to start
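
The crash is keystone-all using the _() translation function before it
has been installed. A minimal stdlib sketch of the ordering fix
(keystone's actual bootstrap helper may differ):

    import gettext

    # Install _() into builtins before running any code that calls it,
    # such as the logging.exception(_('Failed to start ...')) path above.
    gettext.install('keystone', unicode=True)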

** Affects: grenade
 Importance: Undecided
 Status: New

** Affects: keystone
 Importance: Undecided
 Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1294971

Title:
  gate-grenade-dsvm-partial-ncpu Failure in upgrade-keystone

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Identity (Keystone):
  New

Bug description:
  
http://logs.openstack.org/03/81603/2/gate/gate-grenade-dsvm-partial-ncpu/4165949/console.html#_2014-03-20_01_34_10_033
  console.log:
  2014-03-20 01:34:10.033 | [ERROR] /opt/stack/new/devstack/lib/keystone:439 
keystone did not start
  2014-03-20 01:34:11.034 | + die 249 'Failure in upgrade-keystone'
  2014-03-20 01:34:11.035 | + local exitcode=1
  2014-03-20 01:34:11.035 | + set +o xtrace
  2014-03-20 01:34:11.035 | [Call Trace]
  2014-03-20 01:34:11.035 | ./grenade.sh:249:die
  2014-03-20 01:34:11.073 | [ERROR] ./grenade.sh:249 Failure in upgrade-keystone

  screen-key.log
  
http://logs.openstack.org/03/81603/2/gate/gate-grenade-dsvm-partial-ncpu/4165949/logs/new/screen-key.txt.gz#_2014-03-20_01_32_10_651
  2014-03-20 01:32:10.651 30797 CRITICAL keystone [-] NameError: global name 
'_' is not defined
  2014-03-20 01:32:10.651 30797 TRACE keystone Traceback (most recent call 
last):
  2014-03-20 01:32:10.651 30797 TRACE keystone   File 
"/opt/stack/new/keystone/bin/keystone-all", line 146, in 
  2014-03-20 01:32:10.651 30797 TRACE keystone serve(*servers)
  2014-03-20 01:32:10.651 30797 TRACE keystone   File 
"/opt/stack/new/keystone/bin/keystone-all", line 80, in serve
  2014-03-20 01:32:10.651 30797 TRACE keystone logging.exception(_('Failed 
to start the %(name)s server') % {
  2014-03-20 01:32:10.651 30797 TRACE keystone NameError: global name '_' is 
not defined
  2014-03-20 01:32:10.651 30797 TRACE keystone 
  key failed to start

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1294971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258767] Re: Enable VMWare ESXDriver support set_host_enabled

2014-03-19 Thread Jay Lau
The ESX driver is now deprecated.

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258767

Title:
  Enable VMWare ESXDriver support set_host_enabled

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The set_host_enabled API is not supported by the VMware ESXDriver; we
  should support this feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285165] Re: test_add_remove_router_interface_with_port_id fails

2014-03-19 Thread Mauro Sergio Martins Rodrigues
Does it still happen?

Adding neutron to affected projects.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285165

Title:
  test_add_remove_router_interface_with_port_id fails

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Invalid

Bug description:
  Full log:
  
http://logs.openstack.org/54/72854/6/check/check-tempest-dsvm-neutron-pg/b4e8e1f/logs/testr_results.html.gz

  
tempest.api.network.test_routers.RoutersTest.test_add_remove_router_interface_with_port_id[gate,smoke]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2014-02-26 13:07:19,174 Request: POST http://127.0.0.1:5000/v2.0/tokens
  2014-02-26 13:07:19,174 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json'}
  2014-02-26 13:07:19,174 Request Body: {"auth": {"tenantName": 
"RoutersTest-1350141744", "passwordCredentials": {"username": 
"RoutersTest-1283447519", "password": "pass"}}}
  2014-02-26 13:07:19,322 Response Status: 200
  2014-02-26 13:07:19,322 Response Headers: {'content-length': '11138', 'date': 
'Wed, 26 Feb 2014 13:07:19 GMT', 'content-type': 'application/json', 'vary': 
'X-Auth-Token', 'connection': 'close'}
  2014-02-26 13:07:19,323 Response Body: {"access": {"token": {"issued_at": 
"2014-02-26T13:07:19.291621", "expires": "2014-02-26T14:07:19Z", "id": 
"MIITZgYJKoZIhvcNAQcCoIITVzCCE1MCAQExCTAHBgUrDgMCGjCCEbwGCSqGSIb3DQEHAaCCEa0EghGpeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wMi0yNlQxMzowNzoxOS4yOTE2MjEiLCAiZXhwaXJlcyI6ICIyMDE0LTAyLTI2VDE0OjA3OjE5WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlJvdXRlcnNUZXN0LTEzNTAxNDE3NDQtZGVzYyIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImEyZDIwZDIyODg5MDQ2NTM4YjE5YTVlYzE0MzAzNGU4IiwgIm5hbWUiOiAiUm91dGVyc1Rlc3QtMTM1MDE0MTc0NCJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc0L3YyL2EyZDIwZDIyODg5MDQ2NTM4YjE5YTVlYzE0MzAzNGU4IiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc0L3YyL2EyZDIwZDIyODg5MDQ2NTM4YjE5YTVlYzE0MzAzNGU4IiwgImlkIjogIjZjYmFjNTE4MDllZDQxOTVhN2YwZjVmZmRmOWU1OGJkIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvYTJkMjBkMjI4ODkwNDY1Mz
 
hiMTlhNWVjMTQzMDM0ZTgifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogIm5vdmEifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjk2OTYvIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5Njk2LyIsICJpZCI6ICIxNTFhOTFiNjRkNDE0ZjZlODc3MjFiYzNlMjc2NmNjMSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5Njk2LyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi9hMmQyMGQyMjg4OTA0NjUzOGIxOWE1ZWMxNDMwMzRlOCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi9hMmQyMGQyMjg4OTA0NjUzOGIxOWE1ZWMxNDMwMzRlOCIsICJpZCI6ICJkYWNjMjQ1NDM1NTc0ZWJkYjdlY2JkMjc2YzI5MzQ3YyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YyL2EyZDIwZDIyODg5MDQ2NTM4YjE5YTVlYzE0MzAzNGU4In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVydjIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8
 
vMTI3LjAuMC4xOjg3NzQvdjMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3L
  2014-02-26 13:07:19,323 Large body (11138) md5 summary: 
e2591b9572309e9279426e6c3f734264
  2014-02-26 13:07:19,324 Request: POST http://127.0.0.1:9696/v2.0/networks
  2014-02-26 13:07:19,324 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json', 'X-Auth-Token': ''}
  2014-02-26 13:07:19,324 Request Body: {"network": {"name": 
"test-network--1963481106"}}
  2014-02-26 13:07:19,507 Response Status: 201
  2014-02-26 13:07:19,507 OpenStack request id 
req-b0e7bcdf-9f1d-4171-973f-b00c2cbec9b9
  2014-02-26 13:07:19,508 Response Headers: {'content-length': '220', 'date': 
'Wed, 26 Feb 2014 13:07:19 GMT', 'content-type': 'application/json; 
charset=UTF-8', 'connection': 'close'}
  2014-02-26 13:07:19,508 Response Body: {"network": {"status": "ACTIVE", 
"subnets": [], "name": "test-network--1963481106", "admin_state_up": true, 
"tenant_id": "a2d20d22889046538b19a5ec143034e8", "shared": false, "id": 
"d70222a5-7488-41ea-89c4-43cb82607668"}}
  2014-02-26 13:07:19,509 Request: POST http://127.0.0.1:9696/v2.0/subnets
  2014-02-26 13:07:19,510 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json', 'X-Auth-Token': ''}
  2014-02-26 13:07:19,510 Request Body: {"subnet": {"network_id": 
"d70222a5-7488-41ea-89c4-43cb82607668", "ip_version": 4, "cidr": 
"10.100.0.0/28"}}
  2014-02-26 13:07:19,610 Response Status: 201
  2014-02-26 13:07:19,610 OpenStack request id 
req-430cbc36-ad09-4984-8cf7-2be

[Yahoo-eng-team] [Bug 1294939] [NEW] Add a fixed IP to an instance failed

2014-03-19 Thread jichencom
Public bug reported:

+--------------------------------------+-------+-------------+
| ID                                   | Label | CIDR        |
+--------------------------------------+-------+-------------+
| be95de64-a2aa-42de-a522-37802cdbe133 | vmnet | 10.0.0.0/24 |
| 0fd904f5-1870-4066-8213-94038b49be2e | abc   | 10.1.0.0/24 |
| 7cd88ead-fd42-4441-9182-72b3164c108d | abd   | 10.2.0.0/24 |
+--------------------------------------+-------+-------------+

nova add-fixed-ip test15 0fd904f5-1870-4066-8213-94038b49be2e

failed with the following logs:


2014-03-19 03:29:30.546 7822 ERROR nova.openstack.common.rpc.amqp 
[req-fd087223-3646-4fed-b0f6-5a5cf50828eb d6779a827003465db2d3c52fe135d926 
45210fba73d24dd681dc5c292c6b1e7f] Exception during message handling
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp **args)
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/network/manager.py", line 772, in 
add_fixed_ip_to_instance
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp 
self._allocate_fixed_ips(context, instance_id, host, [network])
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/network/manager.py", line 214, in 
_allocate_fixed_ips
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp vpn=vpn, 
address=address)
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/network/manager.py", line 881, in 
allocate_fixed_ip
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp 
self.quotas.rollback(context, reservations)
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/network/manager.py", line 859, in 
allocate_fixed_ip
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp 
'virtual_interface_id': vif['id']}
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp TypeError: 
'NoneType' object is unsubscriptable
2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp
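
The traceback ends at manager.py line 859 building a dict from
vif['id'], so the virtual-interface lookup must have returned None. A
minimal sketch of the missing guard (the function name is hypothetical):

    def build_fixed_ip_values(vif, address):
        # Without this check a missing virtual interface surfaces as
        # "TypeError: 'NoneType' object is unsubscriptable" deep in the
        # RPC handler instead of a meaningful error.
        if vif is None:
            raise ValueError('no virtual interface for this instance '
                             'on the requested network')
        return {'address': address, 'virtual_interface_id': vif['id']}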

** Affects: nova
 Importance: Undecided
 Assignee: jichencom (jichenjc)
 Status: New


** Tags: nova-network

** Changed in: nova
 Assignee: (unassigned) => jichencom (jichenjc)

** Tags added: nova-network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294939

Title:
  Add a fixed IP to an instance failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  +--------------------------------------+-------+-------------+
  | ID                                   | Label | CIDR        |
  +--------------------------------------+-------+-------------+
  | be95de64-a2aa-42de-a522-37802cdbe133 | vmnet | 10.0.0.0/24 |
  | 0fd904f5-1870-4066-8213-94038b49be2e | abc   | 10.1.0.0/24 |
  | 7cd88ead-fd42-4441-9182-72b3164c108d | abd   | 10.2.0.0/24 |
  +--------------------------------------+-------+-------------+

  nova add-fixed-ip test15 0fd904f5-1870-4066-8213-94038b49be2e

  failed with the following logs:

  
  2014-03-19 03:29:30.546 7822 ERROR nova.openstack.common.rpc.amqp 
[req-fd087223-3646-4fed-b0f6-5a5cf50828eb d6779a827003465db2d3c52fe135d926 
45210fba73d24dd681dc5c292c6b1e7f] Exception during message handling
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp **args)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/network/manager.py", line 772, in 
add_fixed_ip_to_instance
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp 
self._allocate_fixed_ips(context, instance_id, host, [network])
  2014-03-19 03:29:30.

[Yahoo-eng-team] [Bug 1294942] [NEW] eventlet should not yield inside db transactions that hold locks

2014-03-19 Thread Maru Newby
Public bug reported:

Whenever an eventlet yield occurs in a db transaction in which one or
more db locks are held, the potential for deadlock exists.

Yields can be triggered by:
  a. network IO (e.g. a plugin calling out to another application)
  b. lock contention (e.g. when code attempts and fails to acquire a semaphore, 
RLock, etc)

It's not always obvious when reviewing code whether either trigger is
possible inside a db transaction.  In some cases, neither submitter nor
reviewer will be aware that a 3rd party library being used inside a
transaction relies upon a locking primitive (e.g. logging uses RLock
internally).  In other cases, it won't always be obvious from the patch
that a method being changed will be wrapped in a transaction (e.g. an
ML2 driver that implements a *_precommit method).

A suggested way of detecting yields so that the offending code can be
fixed pre-merge (provided test coverage) is to wrap transaction
initiation and locally monkey patch (via contextlib) the eventlet
methods responsible for yielding.  Detection could then be communicated
via an exception or logging.

Eventlet methods to target:

greenthread.getcurrent().switch()
eventlet.hubs.get_hub().switch()

Initially, the goal would be to serialize all db transactions.  A
refinement would be to serialize only those db transactions that held
locks.
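
A minimal sketch of that detection idea, assuming eventlet's public hub
API (the wrapper itself is ours): while the context is active, the hub's
switch() is replaced so any cooperative yield is logged or raised.

    import contextlib
    import logging

    from eventlet import hubs

    LOG = logging.getLogger(__name__)

    @contextlib.contextmanager
    def report_yields(strict=False):
        hub = hubs.get_hub()
        original_switch = hub.switch

        def guarded_switch(*args, **kwargs):
            # Reaching this point means the greenthread yielded inside
            # the guarded block, e.g. a db transaction.
            if strict:
                raise RuntimeError('eventlet yielded inside a db '
                                   'transaction')
            LOG.warning('eventlet yielded inside a db transaction')
            return original_switch(*args, **kwargs)

        hub.switch = guarded_switch
        try:
            yield
        finally:
            hub.switch = original_switch

Transaction initiation would then be wrapped in "with report_yields():"
so offending code fails loudly under test.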

** Affects: neutron
 Importance: High
 Assignee: Maru Newby (maru)
 Status: New

** Changed in: neutron
   Importance: Undecided => High

** Description changed:

  Whenever an eventlet yield occurs in a db transaction in which one or
  more db locks are held, the potential for deadlock exists.
  
  Yields can be triggered by:
-   a. network IO (e.g. a plugin calling out to another application)
-   b. lock contention (e.g. when code attempts and fails to acquire a 
semaphore, RLock, etc)
+   a. network IO (e.g. a plugin calling out to another application)
+   b. lock contention (e.g. when code attempts and fails to acquire a 
semaphore, RLock, etc)
  
  It's not always obvious when reviewing code whether either trigger is
  possible inside a db transaction.  In some cases, neither submitter nor
  reviewer will be aware that a 3rd party library being used inside a
  transaction relies upon a locking primitive (e.g. logging uses RLock
  internally).  In other cases, it won't always be obvious from the patch
  that a method being changed will be wrapped in a transaction (e.g. an
  ML2 driver that implements a *_precommit method).
  
- A suggested way of detecting yields so that they the offending code can
- be fixed pre-merge (provided test coverage) is to wrap transaction
+ A suggested way of detecting yields so that the offending code can be
+ fixed pre-merge (provided test coverage) is to wrap transaction
  initiation and locally monkey patch (via contextlib) the eventlet
- methods responsible for yielding.  Detection could be communicated via
- an exception or via logging as desired.
+ methods responsible for yielding.  Detection could then be communicated
+ via an exception or logging.
  
  Eventlet methods to target:
  
  greenthread.getcurrent().switch()
  eventlet.hubs.get_hub.switch()
  
  Initially, the goal would be to serialize all db transactions.  A
  refinement would be to serialize only those db transactions that held
  locks.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294942

Title:
  eventlet should not yield inside db transactions that hold locks

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Whenever an eventlet yield occurs in a db transaction in which one or
  more db locks are held, the potential for deadlock exists.

  Yields can be triggered by:
    a. network IO (e.g. a plugin calling out to another application)
    b. lock contention (e.g. when code attempts and fails to acquire a 
semaphore, RLock, etc)

  It's not always obvious when reviewing code whether either trigger is
  possible inside a db transaction.  In some cases, neither submitter
  nor reviewer will be aware that a 3rd party library being used inside
  a transaction relies upon a locking primitive (e.g. logging uses RLock
  internally).  In other cases, it won't always be obvious from the
  patch that a method being changed will be wrapped in a transaction
  (e.g. an ML2 driver that implements a *_precommit method).

  A suggested way of detecting yields so that the offending code can be
  fixed pre-merge (provided test coverage) is to wrap transaction
  initiation and locally monkey patch (via contextlib) the eventlet
  methods responsible for yielding.  Detection could then be
  communicated via an exception or logging.

  Eventlet methods to target:

  greenthread.getcurrent().switch()
  eventlet.hubs.get_hub().switch()

  Initially, the goal would be to serialize all db transactions.  A
  refinement would be to serialize only

[Yahoo-eng-team] [Bug 1294920] [NEW] delete a net reports wrong information in nova-network

2014-03-19 Thread jichencom
Public bug reported:

We should handle the NetworkInUse exception in the API layer.

[root@controller ~]# nova net-delete be95de64-a2aa-42de-a522-37802cdbe133
ERROR: The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-14462746-5ab9-4eec-819f-d330222a66a0)

api log:

2014-03-19 02:26:11.856 12319 ERROR nova.api.openstack
[req-14462746-5ab9-4eec-819f-d330222a66a0
d6779a827003465db2d3c52fe135d926 45210fba73d24dd681dc5c292c6b1e7f]
Caught error: Network 1 is still in use.

File "/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py", line 2483, 
in network_delete_safe
raise exception.NetworkInUse(network_id=network_id)
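
A hedged sketch of the suggested API-layer handling (the handler name is
assumed): translate NetworkInUse into a 409 Conflict instead of letting
it escape as a 500.

    import webob.exc

    from nova import exception

    # Hypothetical body of the os-networks delete handler.
    def delete(self, req, id):
        context = req.environ['nova.context']
        try:
            self.network_api.delete(context, id)
        except exception.NetworkInUse as e:
            raise webob.exc.HTTPConflict(explanation=e.format_message())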

** Affects: nova
 Importance: Undecided
 Assignee: jichencom (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichencom (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294920

Title:
  delete a net reports wrong information in nova-network

Status in OpenStack Compute (Nova):
  New

Bug description:
  We should handle the NetworkInUse exception in the API layer.

  [root@controller ~]# nova net-delete be95de64-a2aa-42de-a522-37802cdbe133
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-14462746-5ab9-4eec-819f-d330222a66a0)

  api log:

  2014-03-19 02:26:11.856 12319 ERROR nova.api.openstack
  [req-14462746-5ab9-4eec-819f-d330222a66a0
  d6779a827003465db2d3c52fe135d926 45210fba73d24dd681dc5c292c6b1e7f]
  Caught error: Network 1 is still in use.

  File "/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py", line 2483, 
in network_delete_safe
  raise exception.NetworkInUse(network_id=network_id)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294914] [NEW] Unneeded call to network_api on rebuild_instance

2014-03-19 Thread Aaron Rosen
Public bug reported:

When rebuilding an instance we call the network_api, which results in
calling neutron and updating the info cache. We do not actually need to
do this, as we can get the nw_info directly from the instance's info
cache.
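
A one-line sketch of the shortcut, assuming a compute-utils helper that
reads the cached copy (treat the helper name as an assumption here):

    from nova.compute import utils as compute_utils

    # Build network_info from the instance's info cache instead of a
    # round trip through network_api/neutron.
    network_info = compute_utils.get_nw_info_for_instance(instance)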

** Affects: nova
 Importance: Undecided
 Assignee: Aaron Rosen (arosen)
 Status: New


** Tags: icehouse-rc-potential network

** Changed in: nova
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Tags added: icehouse-rc-potential network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294914

Title:
  Unneeded call to network_api on rebuild_instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  When rebuilding an instance we call the network_api, which results
  in calling neutron and updating the info cache. We do not actually
  need to do this, as we can get the nw_info directly from the
  instance's info cache.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291396] Re: No exact match of nodes with flavors

2014-03-19 Thread Rohan
Actually, I was about to add the Nova project to this.

I was already working on adding filters to Nova itself, nothing in
Ironic.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Rohan (kanaderohan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291396

Title:
  No exact match of nodes with flavors

Status in Ironic (Bare Metal Provisioning):
  Triaged
Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  Seems like we are not doing an exact match, and we never did.

  http://paste.openstack.org/show/73250/

  Confirmed with Lucas Gomes that it's not in Ironic nor Nova BM.

  Right now we are using the default nova filters, which accept any host
  with >= the requested resources, with RamWeight=1.

  We probably need to write new filters that implement exact matching and
  list them in nova.conf of the nova image-element (see the sketch after
  the docs link below).

  filters docs
  http://docs.openstack.org/developer/nova/devref/filter_scheduler.html
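
  A sketch of the kind of exact-match filter being proposed, following
  the filter-scheduler interface from the docs linked above (attribute
  names are assumptions):

      from nova.scheduler import filters

      class ExactRamFilter(filters.BaseHostFilter):
          """Pass only hosts whose RAM exactly matches the flavor."""

          def host_passes(self, host_state, filter_properties):
              instance_type = filter_properties.get('instance_type') or {}
              return (host_state.total_usable_ram_mb ==
                      instance_type.get('memory_mb'))

  Such a filter would then be listed in scheduler_default_filters in
  nova.conf.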

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1291396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294900] [NEW] ML2 Cisco Nexus MD: Support portchannel interfaces

2014-03-19 Thread Rich Curran
Public bug reported:

Port of the port-channel interface support that was implemented for the
Cisco core plugin.
https://review.openstack.org/#/c/42037

** Affects: neutron
 Importance: Undecided
 Assignee: Rich Curran (rcurran)
 Status: New


** Tags: cisco ml2

** Changed in: neutron
 Assignee: (unassigned) => Rich Curran (rcurran)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294900

Title:
  ML2 Cisco Nexus MD: Support portchannel interfaces

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Port of the port-channel interface support that was implemented for
  the Cisco core plugin.
  https://review.openstack.org/#/c/42037

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294892] [NEW] Debug logging for DHCP agent config files

2014-03-19 Thread Ian Wienand
Public bug reported:

We've been seeing what appear to be races between the hosts files
being written out for dnsmasq and DHCP requests coming in. We get
occasional errors from dnsmasq saying "no address available" or
"duplicate IP address", but by the time you look, the corresponding
host file has long since been replaced.

Outputting the dnsmasq config file in the debug logs would help in
establishing what the DHCP server state was at the time of the problem.
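
A minimal sketch of what that could look like in the agent's
file-writing path (the function name is ours):

    import logging

    LOG = logging.getLogger(__name__)

    def write_hosts_file(filename, buf):
        # Capture the exact contents at the moment they are written, so
        # the DHCP server state at failure time survives in the logs
        # even after the file on disk has been replaced.
        LOG.debug('Building host file %s with contents:\n%s',
                  filename, buf)
        with open(filename, 'w') as f:
            f.write(buf)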

** Affects: neutron
 Importance: Undecided
 Assignee: Ian Wienand (iwienand)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294892

Title:
  Debug logging for DHCP agent config files

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  We've been seeing what appear to be races between the hosts files
  being written out for dnsmasq and DHCP requests coming in. We get
  occasional errors from dnsmasq saying "no address available" or
  "duplicate IP address", but by the time you look, the corresponding
  host file has long since been replaced.

  Outputting the dnsmasq config file in the debug logs would help in
  establishing what the DHCP server state was at the time of the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294886] [NEW] Unneeded call to network_api on detach_interface

2014-03-19 Thread Aaron Rosen
Public bug reported:

When detaching an interface from an instance we call the network_api,
which results in calling neutron and updating the info cache. We do not
actually need to do this, as we can get the nw_info directly from the
instance's info cache.

** Affects: nova
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: New

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Aaron Rosen (arosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294886

Title:
  Unneeded call to network_api on detach_interface

Status in OpenStack Compute (Nova):
  New

Bug description:
  When detaching an interface from an instance we call the network_api,
  which results in calling neutron and updating the info cache. We do
  not actually need to do this, as we can get the nw_info directly
  from the instance's info cache.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293540] Re: nova should make sure the bridge exists before resuming a VM after an offline snapshot

2014-03-19 Thread Aaron Rosen
** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags removed: low-hanging-fruit
** Tags added: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293540

Title:
  nova should make sure the bridge exists before resuming a VM after an
  offline snapshot

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  My setup is based on icehouse-2: KVM, Neutron with ML2 and the linux
  bridge agent, CentOS 6.5, and LVM as the ephemeral backend.
  The OS should not matter here, and LVM should not matter either; just
  make sure the snapshot takes the VM offline.

  How to reproduce:
  1. create one VM on a compute node (make sure only one VM is present).
  2. snapshot the VM (offline).
  3. the linux bridge agent removes the tap interface from the bridge
  and, since no other interfaces are present, removes the bridge as well.
  4. nova tries to resume the VM and fails since no bridge is present (libvirt 
error, can't get the bridge MTU).

  Side question:
  Why do both neutron and nova deal with the bridge?
  I can understand the need to remove empty bridges, but I believe nova
  should be the one to do it, since nova deals mainly with the bridge
  itself.

  More information:

  During the snapshot Neutron (linux bridge) is called:
  (neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent)
  treat_devices_removed is called; it removes the tap interface and
  calls self.br_mgr.remove_empty_bridges.

  On resume:
  nova/virt/libvirt/driver.py in the snapshot method fails at:
  if CONF.libvirt.virt_type != 'lxc' and not live_snapshot:
      if state == power_state.RUNNING:
          new_dom = self._create_domain(domain=virt_dom)
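
  A hedged sketch of the guard being asked for (method names assumed
  from the libvirt driver): re-plug the VIFs, recreating the bridge if
  the agent removed it, before resuming:

      if CONF.libvirt.virt_type != 'lxc' and not live_snapshot:
          if state == power_state.RUNNING:
              # Recreate the bridge/VIF plumbing that the linux bridge
              # agent may have torn down while the VM was offline.
              self.plug_vifs(instance, network_info)
              new_dom = self._create_domain(domain=virt_dom)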

  Having more than one VM on the same bridge works fine since neutron
  (the linux bridge agent) only removes an empty bridge.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294862] [NEW] Token expiration time with memcache->kvs->dogpile is wrong

2014-03-19 Thread Dag Stenstad
Public bug reported:

There seems to be a bug somewhere in creating the expiration field for
tokens when using the new memcached->kvs->dogpile->memcached token
storage.

Systems are UTC+1 (HW clock, TZ Europe/Oslo), with "[token] expiration =
3600" in the configuration, which is the default.

No requests to any API services (except keystone) worked; all systems
reported that the token is expired.

I put in some debugging, and it seems the expiration is set to UTC +
conf.token.expiration, which in my case is actually the same time as
now().

Setting expiration to a higher value than 3600 makes the token valid.
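
For reference, a minimal illustration of the arithmetic that appears to
be going wrong: an expiry computed from a UTC base is valid only when it
is also compared against UTC, and on a UTC+1 host a local-time
comparison consumes the whole 3600s lifetime.

    import datetime

    LIFETIME = datetime.timedelta(seconds=3600)
    expires = datetime.datetime.utcnow() + LIFETIME

    # Correct check: compare against UTC now.
    assert expires > datetime.datetime.utcnow()

    # The behaviour described above: on a UTC+1 host, local now() is
    # utcnow() + 1 hour, so the token already looks expired.
    local_now = datetime.datetime.now()
    print(expires <= local_now)  # True on a UTC+1 system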

** Affects: keystone
 Importance: Medium
 Assignee: Morgan Fainberg (mdrnstm)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1294862

Title:
  Token expiration time with memcache->kvs->dogpile is wrong

Status in OpenStack Identity (Keystone):
  New

Bug description:
  There seems to be a bug somewhere in creating the expiration field
  for tokens when using the new memcached->kvs->dogpile->memcached
  token storage.

  Systems are UTC+1 (HW clock, TZ Europe/Oslo), with "[token] expiration
  = 3600" in the configuration, which is the default.

  No requests to any API services (except keystone) worked; all systems
  reported that the token is expired.

  I put in some debugging, and it seems the expiration is set to UTC +
  conf.token.expiration, which in my case is actually the same time as
  now().

  Setting expiration to a higher value than 3600 makes the token valid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1294862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294773] Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern - HTTP 500

2014-03-19 Thread David Kranz
This happens quite a lot and seems to be triggered by this error in the
n-cpu log:

2014-03-19 16:46:40.209 ERROR nova.network.neutronv2.api [req-
6afb4d61-2c01-43d7-9caf-fdda126f7497 TestVolumeBootPatternV2-1956299643
TestVolumeBootPatternV2-224738568] Failed to delete neutron port
914b04aa-7f0e-4551-a1e9-2f9acc890409

I will start with the assumption that this is a neutron issue.

** Changed in: neutron
   Status: New => Confirmed

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294773

Title:
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  - HTTP 500

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  Example: http://logs.openstack.org/16/79816/3/check/check-tempest-
  dsvm-neutron/7a4eef5/console.html

  2014-03-19 16:48:18.200 | 
==
  2014-03-19 16:48:18.200 | FAIL: tearDownClass 
(tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2)
  2014-03-19 16:48:18.201 | tearDownClass 
(tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2)
  2014-03-19 16:48:18.201 | 
--
  2014-03-19 16:48:18.201 | _StringException: Traceback (most recent call last):
  2014-03-19 16:48:18.201 |   File "tempest/scenario/manager.py", line 149, in 
tearDownClass
  2014-03-19 16:48:18.201 | cls.cleanup_resource(thing, cls.__name__)
  2014-03-19 16:48:18.201 |   File "tempest/scenario/manager.py", line 113, in 
cleanup_resource
  2014-03-19 16:48:18.202 | resource.delete()
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py", line 25, in 
delete
  2014-03-19 16:48:18.202 | self.manager.delete(self)
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py", line 49, in 
delete
  2014-03-19 16:48:18.202 | self._delete("/os-floating-ips/%s" % 
base.getid(floating_ip))
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/base.py", line 161, in _delete
  2014-03-19 16:48:18.202 | _resp, _body = self.api.client.delete(url)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 292, in delete
  2014-03-19 16:48:18.203 | return self._cs_request(url, 'DELETE', **kwargs)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 260, in 
_cs_request
  2014-03-19 16:48:18.203 | **kwargs)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 242, in 
_time_request
  2014-03-19 16:48:18.203 | resp, body = self.request(url, method, **kwargs)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 236, in request
  2014-03-19 16:48:18.204 | raise exceptions.from_response(resp, body, url, 
method)
  2014-03-19 16:48:18.204 | ClientException: The server has either erred or is 
incapable of performing the requested operation. (HTTP 500) (Request-ID: 
req-7d345883-b8db-4081-a643-7aa9169a95b6)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294853] [NEW] service_get_all in nova.compute.api should return a List object and should not do a filtering

2014-03-19 Thread Santiago Baldassin
Public bug reported:

service_get_all filters the results returned by the service object and
returns an array. This API should return a List object instead, and the
filtering should be done in the sqlalchemy api.

** Affects: nova
 Importance: Undecided
 Assignee: Santiago Baldassin (santiago-b-baldassin)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Santiago Baldassin (santiago-b-baldassin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294853

Title:
  service_get_all in nova.compute.api should return a List object and
  should not do a filtering

Status in OpenStack Compute (Nova):
  New

Bug description:
  service_get_all filters the results returned by the service object
  and returns an array. This API should return a List object instead,
  and the filtering should be done in the sqlalchemy api.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262529] Re: Floating IP takes too long to update in nova and even longer for multiple VMs

2014-03-19 Thread Mauro Sergio Martins Rodrigues
** Changed in: tempest
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262529

Title:
  Floating IP takes too long to update in nova and even longer for
  multiple VMs

Status in devstack - openstack dev environments:
  New
Status in OpenStack Neutron (virtual network service):
  Triaged
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  Associating a floating IP with neutron takes too long to show up in
  the VM's details ('nova show' or 'compute_client.servers.get()'), and
  even longer when more than one VM is involved.

  When launching 2 VMs with floating IPs, you can see in the log that
  it passes once:
  "unchecked floating IPs: {}"
  and then fails with:
  "Timed out while waiting for the floating IP assignments to propagate"

  
http://logs.openstack.org/01/55101/28/check/check-tempest-dsvm-neutron/0541dff/console.html
  
http://logs.openstack.org/01/55101/28/check/check-tempest-dsvm-neutron/f383f4b/console.html
  
http://logs.openstack.org/01/55101/31/check/check-tempest-dsvm-neutron/321413a/console.html
  
http://logs.openstack.org/97/62697/5/check/check-tempest-dsvm-neutron/960c6ad/console.html

  Also, the floating IP is accessible a long time before it is updated
  in the nova DB.

  How to reproduce:
  https://review.openstack.org/#/c/62697/

  So the problem is both:
  1. the time it takes for nova to get the update
  and
  2. the timeout defined in the tempest neutron-gate

  Since I don't see this in my local setup (rhos-4.0), I don't know
  whether this is due to stress in neutron or nova, or whether it's a
  devstack issue.
To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1262529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294763] Re: Cisco havana metaplugin not extending extension aliases from subplugins

2014-03-19 Thread Arvind Somya
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294763

Title:
  Cisco havana metaplugin not extending extension aliases from
  subplugins

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The Cisco monolithic plugin in stable/havana is not extending the list
  of _supported_extension_aliases from the subplugins. As a result the
  user gets a 404 when he/she tries to perform operations on any
  extension resources.

  This issue was seen on a RHEL 6.5 deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270355] Re: ERROR nova.virt.libvirt.driver virNetMessageFree:XX msg=0x found in error logs of check job

2014-03-19 Thread Matthew Treinish
Our enforcement strategy for tracking failures and traces in the logs
has changed since this was filed, so this bug isn't valid any more. But
this looked like a whitelist-matching issue while it still applied.

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270355

Title:
  ERROR nova.virt.libvirt.driver virNetMessageFree:XX msg=0x
  found in error logs of check job

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  2014-01-18 02:05:31.528 | Checking logs...
  2014-01-18 02:05:32.316 | Log File: n-cpu
  2014-01-18 02:05:32.317 | 2014-01-18 01:45:00.948 26765 ERROR 
nova.virt.libvirt.driver [-] [instance: 
5fcba897-e4df-4e75-8a63-d08f136a5e0a]2014-01-18 01:45:00.948+: 29523: debug 
: virNetMessageFree:75 : msg=0x7f398c001690 nfds=0 cb=(nil)
  2014-01-18 02:05:32.317 | 
  2014-01-18 02:05:35.968 | Logs have errors
  2014-01-18 02:05:35.968 | FAILED

  See: http://logs.openstack.org/47/65347/6/check/check-tempest-dsvm-
  full/8f71a0b/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292648] Re: cloud-init should check/format Azure ephemeral disks each boot

2014-03-19 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5~bzr969-0ubuntu1

---
cloud-init (0.7.5~bzr969-0ubuntu1) trusty; urgency=medium

  * New upstream snapshot.
* Azure: Reformat ephemeral disk if it got re-provisioned
  by the cloud on any reboot (LP: #1292648)
* final_message: fix replacement of upper case keynames (LP: #1286164)
* seed_random: do not capture output.  Correctly provide
  environment variable RANDOM_SEED_FILE to command.
* CloudSigma: support base64 encoded user-data
 -- Scott Moser  Wed, 19 Mar 2014 14:04:34 -0400

** Changed in: cloud-init (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1292648

Title:
  cloud-init should check/format Azure ephemeral disks each boot

Status in Init scripts for use on cloud images:
  Fix Committed
Status in “cloud-init” package in Ubuntu:
  Fix Released

Bug description:
  On Windows Azure, the ephemeral disk should be treated as ephemeral
  per boot, not per instance.

  Microsoft has informed us that under the following conditions an ephemeral 
disk may disappear:
  1. The user resizes the instance
  2. A fault causes the instance to move from one physical host to another
  3. A machine is shutdown and then started again

  Essentially, on Azure, the ephemeral disk is extremely ephemeral.
  Users who hit any of the above situations are discovering that /mnt is
  mounted with the default NTFS file system.
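
  A minimal sketch of the per-boot check this implies (assumptions: the
  ephemeral partition is /dev/sdb1 and blkid/mkfs.ext4 are available;
  this is not the actual cloud-init code):

    # Hedged sketch, not cloud-init's real logic; device path is assumed.
    import subprocess

    EPHEMERAL_DEV = '/dev/sdb1'  # typical Azure ephemeral partition (assumed)

    def fs_type(dev):
        out = subprocess.check_output(
            ['blkid', '-o', 'value', '-s', 'TYPE', dev])
        return out.decode('utf-8').strip()

    if fs_type(EPHEMERAL_DEV) == 'ntfs':
        # The cloud handed us a freshly provisioned NTFS disk: reformat
        # it before mounting instead of exposing NTFS at /mnt.
        subprocess.check_call(['mkfs.ext4', '-F', EPHEMERAL_DEV])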

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: cloud-init 0.7.5~bzr964-0ubuntu1 [modified: 
usr/lib/python2.7/dist-packages/cloudinit/config/cc_disk_setup.py 
usr/lib/python2.7/dist-packages/cloudinit/config/cc_final_message.py 
usr/lib/python2.7/dist-packages/cloudinit/config/cc_seed_random.py 
usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceAzure.py 
usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceCloudSigma.py 
usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceSmartOS.py]
  ProcVersionSignature: Ubuntu 3.13.0-17.37-generic 3.13.6
  Uname: Linux 3.13.0-17-generic x86_64
  ApportVersion: 2.13.3-0ubuntu1
  Architecture: amd64
  Date: Fri Mar 14 17:53:20 2014
  PackageArchitecture: all
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1292648/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294781] [NEW] user_id missing from message about federation roles

2014-03-19 Thread Steve Martinelli
Public bug reported:

In the token common provider, the call to _populate_roles_for_groups()
is missing the user_id argument, so it defaults to None.

Thus, whenever a user is associated with a group that has no roles on
either the scoped project or domain, the error message will always be:
'User None has no access to project %(project_id)s'
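
A minimal sketch of the defaulted-argument pattern (illustrative only,
not Keystone's actual code; names are stand-ins):

# Hedged sketch, not Keystone code: user_id silently defaults to None.
def populate_roles_for_groups(group_ids, project_id=None, user_id=None):
    roles = []  # role lookup omitted in this sketch
    if not roles:
        raise Exception('User %(user_id)s has no access to project '
                        '%(project_id)s' % {'user_id': user_id,
                                            'project_id': project_id})

try:
    # Buggy call site: user_id omitted.
    populate_roles_for_groups(['group1'], project_id='project1')
except Exception as exc:
    print(exc)  # -> User None has no access to project project1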

** Affects: keystone
 Importance: Undecided
 Assignee: Steve Martinelli (stevemar)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1294781

Title:
  user_id missing from message about federation roles

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  In the token common provider, the call to _populate_roles_for_groups()
  is missing the user_id argument, and it is defaulted to None.

  Thus, whenever a user is associated with a group that has no roles on
  either scoped project or domain, then the error message will always
  be: 'User None has no access  to project %(project_id)s'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1294781/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292648] Re: cloud-init should check/format Azure ephemeral disks each boot

2014-03-19 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1292648

Title:
  cloud-init should check/format Azure ephemeral disks each boot

Status in Init scripts for use on cloud images:
  Fix Committed
Status in “cloud-init” package in Ubuntu:
  In Progress

Bug description:
  On Windows Azure, the ephemeral disk should be treated as ephemeral
  per boot, not per instance.

  Microsoft has informed us that under the following conditions an ephemeral 
disk may disappear:
  1. The user resizes the instance
  2. A fault causes the instance to move from one physical host to another
  3. A machine is shutdown and then started again

  Essentially, on Azure, the ephemeral disk is extremely ephemeral.
  Users who hit any of the above situations are discovering that /mnt is
  mounted with the default NTFS file system.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: cloud-init 0.7.5~bzr964-0ubuntu1 [modified: 
usr/lib/python2.7/dist-packages/cloudinit/config/cc_disk_setup.py 
usr/lib/python2.7/dist-packages/cloudinit/config/cc_final_message.py 
usr/lib/python2.7/dist-packages/cloudinit/config/cc_seed_random.py 
usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceAzure.py 
usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceCloudSigma.py 
usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceSmartOS.py]
  ProcVersionSignature: Ubuntu 3.13.0-17.37-generic 3.13.6
  Uname: Linux 3.13.0-17-generic x86_64
  ApportVersion: 2.13.3-0ubuntu1
  Architecture: amd64
  Date: Fri Mar 14 17:53:20 2014
  PackageArchitecture: all
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1292648/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294756] [NEW] missing test for None in sqlalchemy query filter

2014-03-19 Thread Chris Friesen
Public bug reported:

In db.sqlalchemy.api.instance_get_all_by_filters() there is code that
looks like this:

if not filters.pop('soft_deleted', False):
    query_prefix = query_prefix.\
        filter(models.Instance.vm_state != vm_states.SOFT_DELETED)


In sqlalchemy a comparison against a non-null value will not match null values, 
so the above filter will not return objects where vm_state is NULL.

The problem is that in the Instance object the "vm_state" field is
declared as nullable.  In many cases "vm_state" will in fact have a
value, but in get_test_instance() in test/utils.py the value of
"vm_state" is not specified.

Given the above, it seems that either we need to configure
"models.Instance.vm_state" as not nullable (and deal with the fallout),
or else we need to update instance_get_all_by_filters() to explicitly
check for None--something like this perhaps:

if not filters.pop('soft_deleted', False):
    query_prefix = query_prefix.\
        filter(or_(models.Instance.vm_state != vm_states.SOFT_DELETED,
                   models.Instance.vm_state == None))

If we want to fix the query, I'll happily submit the updated code.
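
For reference, the NULL semantics are easy to demonstrate outside of
nova; a self-contained sketch against an in-memory SQLite database (not
Nova code, and using a literal in place of vm_states.SOFT_DELETED):

# Demonstration of the NULL-comparison behaviour described above.
from sqlalchemy import Column, Integer, String, create_engine, or_
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    vm_state = Column(String, nullable=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Instance(vm_state='active'),
                 Instance(vm_state='soft-delete'),
                 Instance(vm_state=None)])

# != silently drops the NULL row: prints 1, not 2.
print(session.query(Instance)
             .filter(Instance.vm_state != 'soft-delete').count())

# Adding an explicit None test brings it back: prints 2.
print(session.query(Instance)
             .filter(or_(Instance.vm_state != 'soft-delete',
                         Instance.vm_state == None)).count())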

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294756

Title:
  missing test for None in sqlalchemy query filter

Status in OpenStack Compute (Nova):
  New

Bug description:
  In db.sqlalchemy.api.instance_get_all_by_filters() there is code that
  looks like this:

  if not filters.pop('soft_deleted', False):
      query_prefix = query_prefix.\
          filter(models.Instance.vm_state != vm_states.SOFT_DELETED)

  
  In sqlalchemy a comparison against a non-null value will not match null 
values, so the above filter will not return objects where vm_state is NULL.

  The problem is that in the Instance object the "vm_state" field is
  declared as nullable.  In many cases "vm_state" will in fact have a
  value, but in get_test_instance() in test/utils.py the value of
  "vm_state" is not specified.

  Given the above, it seems that either we need to configure
  "models.Instance.vm_state" as not nullable (and deal with the
  fallout), or else we need to update instance_get_all_by_filters() to
  explicitly check for None--something like this perhaps:

  if not filters.pop('soft_deleted', False):
      query_prefix = query_prefix.\
          filter(or_(models.Instance.vm_state != vm_states.SOFT_DELETED,
                     models.Instance.vm_state == None))

  If we want to fix the query, I'll happily submit the updated code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294774] [NEW] Remove nova.conf.sample from the tree replace with README.nova.conf

2014-03-19 Thread Tracy Jones
Public bug reported:

We decided not to try to keep nova.conf.sample in sync, so we need to
remove it from the tree.  However, we should either generate it in
setup.py OR add a README in the sample dir telling people how to
generate it.

** Affects: nova
 Importance: Medium
 Assignee: Tracy Jones (tjones-i)
 Status: New

** Changed in: nova
Milestone: None => icehouse-rc1

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Tracy Jones (tjones-i)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294774

Title:
  Remove nova.conf.sample from the tree replace with README.nova.conf

Status in OpenStack Compute (Nova):
  New

Bug description:
  We decided not to try to keep nova.conf.sample in sync, so we need to
  remove it from the tree.  However, we should either generate it in
  setup.py OR add a README in the sample dir telling people how to
  generate it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294773] [NEW] tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern - HTTP 500

2014-03-19 Thread Davanum Srinivas (DIMS)
Public bug reported:

Example: http://logs.openstack.org/16/79816/3/check/check-tempest-dsvm-
neutron/7a4eef5/console.html

2014-03-19 16:48:18.200 | 
==
2014-03-19 16:48:18.200 | FAIL: tearDownClass 
(tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2)
2014-03-19 16:48:18.201 | tearDownClass 
(tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2)
2014-03-19 16:48:18.201 | 
--
2014-03-19 16:48:18.201 | _StringException: Traceback (most recent call last):
2014-03-19 16:48:18.201 |   File "tempest/scenario/manager.py", line 149, in 
tearDownClass
2014-03-19 16:48:18.201 | cls.cleanup_resource(thing, cls.__name__)
2014-03-19 16:48:18.201 |   File "tempest/scenario/manager.py", line 113, in 
cleanup_resource
2014-03-19 16:48:18.202 | resource.delete()
2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py", line 25, in 
delete
2014-03-19 16:48:18.202 | self.manager.delete(self)
2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py", line 49, in 
delete
2014-03-19 16:48:18.202 | self._delete("/os-floating-ips/%s" % 
base.getid(floating_ip))
2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/base.py", line 161, in _delete
2014-03-19 16:48:18.202 | _resp, _body = self.api.client.delete(url)
2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 292, in delete
2014-03-19 16:48:18.203 | return self._cs_request(url, 'DELETE', **kwargs)
2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 260, in 
_cs_request
2014-03-19 16:48:18.203 | **kwargs)
2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 242, in 
_time_request
2014-03-19 16:48:18.203 | resp, body = self.request(url, method, **kwargs)
2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 236, in request
2014-03-19 16:48:18.204 | raise exceptions.from_response(resp, body, url, 
method)
2014-03-19 16:48:18.204 | ClientException: The server has either erred or is 
incapable of performing the requested operation. (HTTP 500) (Request-ID: 
req-7d345883-b8db-4081-a643-7aa9169a95b6)

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294773

Title:
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  - HTTP 500

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  Example: http://logs.openstack.org/16/79816/3/check/check-tempest-
  dsvm-neutron/7a4eef5/console.html

  2014-03-19 16:48:18.200 | 
==
  2014-03-19 16:48:18.200 | FAIL: tearDownClass 
(tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2)
  2014-03-19 16:48:18.201 | tearDownClass 
(tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2)
  2014-03-19 16:48:18.201 | 
--
  2014-03-19 16:48:18.201 | _StringException: Traceback (most recent call last):
  2014-03-19 16:48:18.201 |   File "tempest/scenario/manager.py", line 149, in 
tearDownClass
  2014-03-19 16:48:18.201 | cls.cleanup_resource(thing, cls.__name__)
  2014-03-19 16:48:18.201 |   File "tempest/scenario/manager.py", line 113, in 
cleanup_resource
  2014-03-19 16:48:18.202 | resource.delete()
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py", line 25, in 
delete
  2014-03-19 16:48:18.202 | self.manager.delete(self)
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py", line 49, in 
delete
  2014-03-19 16:48:18.202 | self._delete("/os-floating-ips/%s" % 
base.getid(floating_ip))
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/base.py", line 161, in _delete
  2014-03-19 16:48:18.202 | _resp, _body = self.api.client.delete(url)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 292, in delete
  2014-03-19 16:48:18.203 | return self._cs_request(url, 'DELETE', **kwargs)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 260, in 
_cs_request
  2014-03-19 16:48:18.203 | **kwargs)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.p

[Yahoo-eng-team] [Bug 1294775] [NEW] Image needs to be created before uploading the data bits

2014-03-19 Thread nikhil komawar
Public bug reported:

Currently, we need to have an Image object (or record) to be able to set
its data bits.

We need an alternative mechanism for Image creation for cases where a
failure in uploading the data bits should also fail the Image creation.

Example use case:
- Say an import task fails while uploading data bits to the data store. The 
Image would be stuck in the saving/queued state indefinitely, resulting in 
superfluous Image entries in the user's Image list.
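
One possible shape for such a mechanism, sketched with a hypothetical
client API (these method names are stand-ins, not the actual Glance
interface):

# Hedged sketch: roll the image record back if the data upload fails.
def create_image_with_data(client, name, data):
    image = client.image_create(name=name)  # record starts out queued
    try:
        client.image_upload(image['id'], data)
    except Exception:
        # Upload failed: delete the record rather than leaving it
        # stuck in the saving/queued state indefinitely.
        client.image_delete(image['id'])
        raise
    return image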

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1294775

Title:
  Image needs to be created before uploading the data bits

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Currently, we need to have an Image object (or record) to be able to
  set its data bits.

  We need an alternative mechanism for Image creation for cases where a
  failure in uploading the data bits should also fail the Image
  creation.

  Example use case:
  - Say an import task fails while uploading data bits to the data store. The 
  Image would be stuck in the saving/queued state indefinitely, resulting in 
  superfluous Image entries in the user's Image list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1294775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269407] Re: Instance Termination delays in updating port list

2014-03-19 Thread Aaron Rosen
Hi Yair,

I looked into this and could reproduce it with your script. The reason
this occurs is that the events to nova-api are async. When 'nova delete'
returns, the instance isn't actually deleted until some time later, at
which point the port is removed. This is why you get the error that
ports still exist when you go to delete the subnet. To handle this you
need to loop on 'nova list/show' for that instance until it is gone
(see the sketch below); then you can delete the subnet without a
problem.
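
A minimal sketch of that loop (illustrative only; 'server_exists' is a
hypothetical stand-in for a nova list/show lookup):

import time

def wait_for_delete(server_exists, instance_id, timeout=60, interval=2):
    # Poll until the instance (and therefore its port) is really gone.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not server_exists(instance_id):
            return
        time.sleep(interval)
    raise RuntimeError('instance %s still present after %ss'
                       % (instance_id, timeout))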

** Changed in: nova
   Status: New => Incomplete

** Changed in: neutron
   Status: New => Invalid

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: neutron
 Assignee: Aaron Rosen (arosen) => (unassigned)

** Changed in: nova
 Assignee: Aaron Rosen (arosen) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269407

Title:
  Instance Termination delays in updating port list

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When deleting an instance, the port list is not updated immediately.
  As a result - deleting net/subnet fails with error:

  409-{u'NeutronError': {u'message': u'Unable to complete operation on
  subnet UUID. One or more ports have an IP allocation from this
  subnet.', u'type': u'SubnetInUse', u'detail': u''}}

  (*) Happens only on automated scripts, since manual input isn't quick
  enough to catch this.

  (**) Happens only when Floating IP is attached - error doesn't happen when 
Floating IP isn't used.
  my guess: Nova delays in deleting the VM while checking with neutron DB that 
Floating IP was deleted.

  (***) nova delete command returns BEFORE instance is deleted:
  > nova delete $instance_id ; nova show $instance_id
  will return the instance without failure

  (*) might also affect Tempest during tearDown execution

  
  Version:
  openstack-nova-api-2014.1-0.5.b1.el6.noarch
  openstack-nova-compute-2014.1-0.5.b1.el6.noarch
  openstack-nova-scheduler-2014.1-0.5.b1.el6.noarch
  openstack-nova-console-2014.1-0.5.b1.el6.noarch
  openstack-nova-conductor-2014.1-0.5.b1.el6.noarch
  openstack-nova-cert-2014.1-0.5.b1.el6.noarch

  python-neutron-2014.1-0.1.b1.el6.noarch
  openstack-neutron-2014.1-0.1.b1.el6.noarch
  openstack-neutron-openvswitch-2014.1-0.1.b1.el6.noarch

  How to reproduce:
   script attached 
  assumes:
  1. external network exists with floating ip range available ("public")
  2. image exists

  Setup:
  1 create network "private"
  2. create subnet
  3. create router:
  3.1 set router gateway to "public"
  3.2 set router interface to "private"
  4. create VM
  5. assign Floating IP to VM

  TearDown
  1. Delete / Disassociate Floating IP
  2. Delete VM
  3. detach router interface from subnet (router-interface-delete)
  4. Delete subnet/net

  Expected Result:
  subnet/net should be successfully deleted.

  Actual Results:
  "Unable to complete operation on subnet UUID. One or more ports have an IP 
allocation from this subnet"

  409-{u'NeutronError': {u'message': u'Unable to complete operation on
  subnet UUID. One or more ports have an IP allocation from this
  subnet.', u'type': u'SubnetInUse', u'detail': u''}}

  script log:

  line 101 - VM port still in port list even though VM was deleted
  line 105 - subnet fails to delete
  line 117 - network successfully deleted after enough time passed for port 
list to update

    1 + EXT_NET_NAME=public
    2 + NET_NAME=my_net
    3 + SUBNET_NAME=my_subnet
    4 + ROUTER_NAME=my_router
    5 + SERVER_NAME=my_server
    6 + IMAGE_NAME='cirros-0.3.1-x86_64-uec '
    7 + MASK=54.0.0
    8 + SERVER_IP=54.0.0.6
    9 ++ neutron net-list
   10 ++ grep public
   11 ++ awk '{print $2;}'
   12 + EXT_NET_ID=200a91cf-5376-4095-8722-2f247ddb01c9
   13 ++ nova image-list
   14 ++ grep -w ' cirros-0.3.1-x86_64-uec  '
   15 ++ awk '{print $2;}'
   16 + IMAGE_ID=1f16b297-aeaa-4fa9-9640-269695b6eb48
   17 ++ grep -w id
   18 ++ neutron net-create my_net
   19 ++ awk '{print $4;}'
   20 + NET_ID=6ec5ef65-5279-4bbd-919a-b45a27bb31cd
   21 ++ neutron subnet-create --name my_subnet 
6ec5ef65-5279-4bbd-919a-b45a27bb31cd 54.0.0.0/24
   22 ++ grep -w id
   23 ++ awk '{print $4;}'
   24 + SUBNET_ID=76abfa0f-938a-4be1-abd5-804af306fa2d
   25 ++ neutron router-create my_router
   26 ++ awk '{print $4;}'
   27 ++ grep -w id
   28 + ROUTER_ID=df211133-0513-44fc-bec5-38f9bca74025
   29 + neutron router-gateway-set df211133-0513-44fc-bec5-38f9bca74025 
200a91cf-5376-4095-8722-2f247ddb01c9
   30 Set gateway for router df211133-0513-44fc-bec5-38f9bca74025
   31 + neutron router-interface-add df211133-0513-44fc-bec5-38f9bca74025 
76abfa0f-938a-4be1-abd5-804af306fa2d
   32 Added interface cafd4161-f840-4c87-a80b-71b0ef374b9e to router 
df211133-0513-44fc-bec5-38f9bca74025.
   33 + nova boot --flavor 2 --image 1f16b297-

[Yahoo-eng-team] [Bug 1294481] Re: nova.conf.sample out of sync

2014-03-19 Thread Tracy Jones
I'm removing this from rc1 based on that and opening a new bug on
removing it, adding a README, or adding it to the packaging step.

** Changed in: nova
Milestone: icehouse-rc1 => None

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294481

Title:
  nova.conf.sample out of sync

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  $ tools/config/generate_sample.sh -b . -p nova -o etc/nova
  $ git diff
  diff --git a/etc/nova/nova.conf.sample b/etc/nova/nova.conf.sample
  index 3e9bef8..47e98c9 100644
  --- a/etc/nova/nova.conf.sample
  +++ b/etc/nova/nova.conf.sample
  @@ -1962,6 +1962,16 @@
   # Whether to use cow images (boolean value)
   #use_cow_images=true
   
  +# Fail instance boot if vif plugging fails (boolean value)
  +#vif_plugging_is_fatal=true
  +
  +# Number of seconds to wait for neutron vif plugging events to
  +# arrive before continuing or failing (see
  +# vif_plugging_is_fatal). If this is set to zero and
  +# vif_plugging_is_fatal is False, events should not be
  +# expected to arrive at all. (integer value)
  +#vif_plugging_timeout=300
  +
   
   #
   # Options defined in nova.virt.firewall
  @@ -1999,6 +2009,17 @@
   
   
   #
  +# Options defined in nova.virt.imagehandler
  +#
  +
  +# Specifies which image handler extension names to use for
  +# handling images. The first extension in the list which can
  +# handle the image with a suitable location will be used.
  +# (list value)
  +#image_handlers=download
  +
  +
  +#
   # Options defined in nova.virt.images

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294763] [NEW] Cisco havana metaplugin not extending extension aliases from subplugins

2014-03-19 Thread Arvind Somya
Public bug reported:

The Cisco monolithic plugin in stable/havana is not extending the list
of _supported_extension_aliases from the subplugins. As a result the
user gets a 404 when he/she tries to perform operations on any extension
resources.

This issue was seen on a RHEL 6.5 deployment.

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: cisco havana-backport-potential

** Tags added: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294763

Title:
  Cisco havana metaplugin not extending extension aliases from
  subplugins

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Cisco monolithic plugin in stable/havana is not extending the list
  of _supported_extension_aliases from the subplugins. As a result the
  user gets a 404 when he/she tries to perform operations on any
  extension resources.

  This issue was seen on a RHEL 6.5 deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292589] Re: Can't set any default quota values in Horizon

2014-03-19 Thread Sergio Cazzolato
** No longer affects: python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1292589

Title:
  Can't set any default quota values in Horizon

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Manuals:
  New

Bug description:
  To re-create, log in as admin.  Navigate to Admin/System
  Panel/Defaults.  Click Update Defaults.  Change any quota to a
  reasonable value (or don't change any at all) and click Update
  Defaults.  There will be both a success and error message (there is a
  separate bug about that).  But no changes from the UI will be applied.
  Inside the log there is the following trace info:

  [Fri Mar 14 14:20:30 2014] [error] NotFound: Not found (HTTP 404)
  [Fri Mar 14 14:27:04 2014] [error] Not Found: Not found (HTTP 404)
  [Fri Mar 14 14:27:04 2014] [error] Traceback (most recent call last):
  [Fri Mar 14 14:27:04 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/defaults/workflows.py",
 line 93, in handle
  [Fri Mar 14 14:27:04 2014] [error] nova.default_quota_update(request, 
**nova_data)
  [Fri Mar 14 14:27:04 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py",
 line 626, in default_quota_update
  [Fri Mar 14 14:27:04 2014] [error] 
novaclient(request).quota_classes.update(DEFAULT_QUOTA_NAME, **kwargs)
  [Fri Mar 14 14:27:04 2014] [error]   File 
"/usr/lib/python2.6/site-packages/novaclient/v1_1/quota_classes.py", line 44, 
in update
  [Fri Mar 14 14:27:04 2014] [error] 'quota_class_set')
  [Fri Mar 14 14:27:04 2014] [error]   File 
"/usr/lib/python2.6/site-packages/novaclient/base.py", line 165, in _update
  [Fri Mar 14 14:27:04 2014] [error] _resp, body = self.api.client.put(url, 
body=body)
  [Fri Mar 14 14:27:04 2014] [error]   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 289, in put
  [Fri Mar 14 14:27:04 2014] [error] return self._cs_request(url, 'PUT', 
**kwargs)
  [Fri Mar 14 14:27:04 2014] [error]   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 260, in 
_cs_request
  [Fri Mar 14 14:27:04 2014] [error] **kwargs)
  [Fri Mar 14 14:27:04 2014] [error]   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 242, in 
_time_request
  [Fri Mar 14 14:27:04 2014] [error] resp, body = self.request(url, method, 
**kwargs)
  [Fri Mar 14 14:27:04 2014] [error]   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 236, in request
  [Fri Mar 14 14:27:04 2014] [error] raise exceptions.from_response(resp, 
body, url, method)
  [Fri Mar 14 14:27:04 2014] [error] NotFound: Not found (HTTP 404)

  I'm concerned that this has been caused by a change in nova that we haven't 
reflected in horizon
  http://lists.openstack.org/pipermail/openstack-dev/2014-February/027560.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1292589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291637] Re: memcache client race

2014-03-19 Thread Tracy Jones
** Also affects: keystone
   Importance: Undecided
   Status: New

** Tags added: api

** Changed in: nova
Milestone: None => icehouse-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291637

Title:
  memcache client race

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova uses thread-unsafe memcache client objects in multiple threads.
  For instance, nova-api's metadata WSGI server uses the same
  nova.api.metadata.handler.MetadataRequestHandler._cache object for
  every request. A memcache client object is thread unsafe because it
  has a single open socket connection to memcached. Thus the multiple
  threads will read from & write to the same socket fd.

  Keystoneclient has the same bug. See https://bugs.launchpad.net
  /python-keystoneclient/+bug/1289074 for a patch to fix the problem.
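
  One common fix is to give each thread its own client so no two
  threads ever share a socket; a minimal sketch using python-memcached
  (not the patch referenced above):

    # Hedged sketch: one memcache client per thread via threading.local.
    import threading

    import memcache  # python-memcached (assumed available)

    _local = threading.local()

    def get_memcache_client(servers):
        client = getattr(_local, 'client', None)
        if client is None:
            client = memcache.Client(servers)
            _local.client = client
        return client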

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294069] Re: XenAPI: Boot from volume without image_ref broken

2014-03-19 Thread Bob Ball
** Project changed: nova => devstack

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => John Garbutt (johngarbutt)

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => In Progress

** Changed in: devstack
 Assignee: John Garbutt (johngarbutt) => Bob Ball (bob-ball)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294069

Title:
  XenAPI: Boot from volume without image_ref broken

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  https://review.openstack.org/#/c/78194/ changed tempest to clear
  image_ref for some BFV tests - in particular the
  test_volume_boot_pattern

  This now results in a "KeyError: 'disk_format'" exception from Nova
  when using the XenAPI driver.

  http://paste.openstack.org/show/73733/ is a nicer format of the below
  - but might disappear!

  2014-03-18 11:20:07.475 ERROR nova.compute.manager 
[req-82096fe0-921a-4bc1-9c41-d0aafad4c923 TestVolumeBootPattern-581093620 
TestVolumeBootPattern-1800543246] [instance: 
2b047f24-675c-4921-8cf3-85584097f106] Error: 'disk_format'
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] Traceback (most recent call last):
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1306, in _build_instance
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] set_access_ip=set_access_ip)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/compute/manager.py", line 394, in decorated_function
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] return function(self, context, *args, 
**kwargs)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1708, in _spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] six.reraise(self.type_, self.value, 
self.tb)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1705, in _spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] block_device_info)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 236, in spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] admin_password, network_info, 
block_device_info)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 357, in spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] network_info, block_device_info, 
name_label, rescue)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 526, in _spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] 
undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File "/opt/stack/nova/nova/utils.py", 
line 812, in rollback_and_reraise
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] self._rollback()
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] six.reraise(self.type_, self.value, 
self.tb)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 501, in _spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.mana

[Yahoo-eng-team] [Bug 1294735] [NEW] Disable domain doesn't disable users in the domain

2014-03-19 Thread Haneef Ali
Public bug reported:

If you disable a domain, the users in the domain are not disabled.

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- disable domain
+ Disable domain doesn't disable users in the domain

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1294735

Title:
  Disable domain doesn't disable users in the domain

Status in OpenStack Identity (Keystone):
  New

Bug description:
  If you disable a domain, the users in the domain are not disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1294735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294737] [NEW] Disable domain doesn't remove domain scoped tokens

2014-03-19 Thread Haneef Ali
Public bug reported:

Disabling a domain only revokes project-scoped tokens. It doesn't
revoke domain-scoped tokens.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1294737

Title:
  Disable domain doesn't remove domain scoped tokens

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Disabling a domain only revokes project-scoped tokens. It doesn't
  revoke domain-scoped tokens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1294737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294720] [NEW] vmware vshield UT sporadic failure in gate

2014-03-19 Thread Kevin Benton
Public bug reported:

Unit test
neutron.tests.unit.vmware.vshield.test_edge_router.ServiceRouterTestCase.test_router_create
sporadically failed in the gate for an unrelated patch.

Patch:
https://review.openstack.org/#/c/81137/

Failure:
http://logs.openstack.org/37/81137/3/check/gate-neutron-python26/10acc29/console.html

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294720

Title:
  vmware vshield UT sporadic failure in gate

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Unit test
  
neutron.tests.unit.vmware.vshield.test_edge_router.ServiceRouterTestCase.test_router_create
  sporadically failed in the gate for an unrelated patch.

  Patch:
  https://review.openstack.org/#/c/81137/

  Failure:
  
http://logs.openstack.org/37/81137/3/check/gate-neutron-python26/10acc29/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294715] [NEW] "Build timed out" gate-neutron-python27

2014-03-19 Thread Kevin Benton
Public bug reported:

Getting build timeout errors in py27 neutron job in the gate.


Log:


2014-03-19 07:34:54.619 | Started by user anonymous
2014-03-19 07:34:54.623 | [EnvInject] - Loading node environment variables.
2014-03-19 07:34:57.861 | Building remotely on bare-precise-hpcloud-az1-2848150 
in workspace /home/jenkins/workspace/gate-neutron-python27
2014-03-19 07:35:15.391 | [gate-neutron-python27] $ /bin/bash -xe 
/tmp/hudson8042698517758670191.sh
2014-03-19 07:35:16.136 | + /usr/local/jenkins/slave_scripts/gerrit-git-prep.sh 
https://review.openstack.org git://git.openstack.org
2014-03-19 07:35:16.138 | Triggered by: https://review.openstack.org/80130
2014-03-19 07:35:16.140 | + [[ ! -e .git ]]
2014-03-19 07:35:16.141 | + ls -a
2014-03-19 07:35:16.141 | .
2014-03-19 07:35:16.141 | ..
2014-03-19 07:35:16.142 | + rm -fr '.[^.]*' '*'
2014-03-19 07:35:16.142 | + '[' -d /opt/git/openstack/neutron/.git ']'
2014-03-19 07:35:16.147 | + git clone file:///opt/git/openstack/neutron .
2014-03-19 07:35:16.160 | Cloning into '.'...
2014-03-19 07:35:29.606 | + git remote set-url origin 
git://git.openstack.org/openstack/neutron
2014-03-19 07:35:29.630 | + git remote update
2014-03-19 07:35:29.633 | Fetching origin
2014-03-19 07:35:31.126 | From git://git.openstack.org/openstack/neutron
2014-03-19 07:35:31.127 |0009e47..e75f485  master -> origin/master
2014-03-19 07:35:31.164 |  * [new branch]  stable/grizzly -> 
origin/stable/grizzly
2014-03-19 07:35:31.165 |  * [new branch]  stable/havana -> 
origin/stable/havana
2014-03-19 07:35:31.165 | + git reset --hard
2014-03-19 07:35:31.488 | HEAD is now at 0009e47 Merge "NSX: Ensure gateway 
devices are usable after upgrade"
2014-03-19 07:35:31.525 | + git clean -x -f -d -q
2014-03-19 07:35:31.526 | + '[' -z '' ']'
2014-03-19 07:35:31.526 | + git fetch 
http://zm02.openstack.org/p/openstack/neutron 
refs/zuul/master/Za222d85c4aca463ba72447bcf43e09b1
2014-03-19 07:35:32.618 | From http://zm02.openstack.org/p/openstack/neutron
2014-03-19 07:35:32.618 |  * branch
refs/zuul/master/Za222d85c4aca463ba72447bcf43e09b1 -> FETCH_HEAD
2014-03-19 07:35:32.619 | + git checkout FETCH_HEAD
2014-03-19 07:35:32.657 | Note: checking out 'FETCH_HEAD'.
2014-03-19 07:35:32.657 | 
2014-03-19 07:35:32.657 | You are in 'detached HEAD' state. You can look 
around, make experimental
2014-03-19 07:35:32.658 | changes and commit them, and you can discard any 
commits you make in this
2014-03-19 07:35:32.658 | state without impacting any branches by performing 
another checkout.
2014-03-19 07:35:32.658 | 
2014-03-19 07:35:32.658 | If you want to create a new branch to retain commits 
you create, you may
2014-03-19 07:35:32.658 | do so (now or later) by using -b with the checkout 
command again. Example:
2014-03-19 07:35:32.658 | 
2014-03-19 07:35:32.658 |   git checkout -b new_branch_name
2014-03-19 07:35:32.658 | 
2014-03-19 07:35:32.659 | HEAD is now at b72c2a1... Merge commit 
'refs/changes/30/80130/1' of ssh://review.openstack.org:29418/openstack/neutron 
into HEAD
2014-03-19 07:35:32.659 | + git reset --hard FETCH_HEAD
2014-03-19 07:35:32.660 | HEAD is now at b72c2a1 Merge commit 
'refs/changes/30/80130/1' of ssh://review.openstack.org:29418/openstack/neutron 
into HEAD
2014-03-19 07:35:32.661 | + git clean -x -f -d -q
2014-03-19 07:35:32.698 | + '[' -f .gitmodules ']'
2014-03-19 07:35:33.257 | [gate-neutron-python27] $ /bin/bash -xe 
/tmp/hudson3752183789184546031.sh
2014-03-19 07:35:33.264 | + /usr/local/jenkins/slave_scripts/run-unittests.sh 
27 openstack neutron
2014-03-19 07:35:33.267 | + version=27
2014-03-19 07:35:33.268 | + org=openstack
2014-03-19 07:35:33.268 | + project=neutron
2014-03-19 07:35:33.268 | + source /usr/local/jenkins/slave_scripts/functions.sh
2014-03-19 07:35:33.268 | + check_variable_version_org_project 27 openstack 
neutron /usr/local/jenkins/slave_scripts/run-unittests.sh
2014-03-19 07:35:33.269 | + version=27
2014-03-19 07:35:33.269 | + org=openstack
2014-03-19 07:35:33.269 | + project=neutron
2014-03-19 07:35:33.269 | + 
filename=/usr/local/jenkins/slave_scripts/run-unittests.sh
2014-03-19 07:35:33.269 | + [[ -z 27 ]]
2014-03-19 07:35:33.270 | + [[ -z openstack ]]
2014-03-19 07:35:33.270 | + [[ -z neutron ]]
2014-03-19 07:35:33.270 | + venv=py27
2014-03-19 07:35:33.270 | + export NOSE_WITH_XUNIT=1
2014-03-19 07:35:33.271 | + NOSE_WITH_XUNIT=1
2014-03-19 07:35:33.271 | + export NOSE_WITH_HTML_OUTPUT=1
2014-03-19 07:35:33.271 | + NOSE_WITH_HTML_OUTPUT=1
2014-03-19 07:35:33.271 | + export NOSE_HTML_OUT_FILE=nose_results.html
2014-03-19 07:35:33.271 | + NOSE_HTML_OUT_FILE=nose_results.html
2014-03-19 07:35:33.272 | ++ /bin/mktemp -d
2014-03-19 07:35:33.272 | + export TMPDIR=/tmp/tmp.rV65vaO65x
2014-03-19 07:35:33.272 | + TMPDIR=/tmp/tmp.rV65vaO65x
2014-03-19 07:35:33.273 | + trap 'rm -rf /tmp/tmp.rV65vaO65x' EXIT
2014-03-19 07:35:33.273 | + 
/usr/local/jenkins/slave_scripts/jenkins-oom-grep.sh pre
2014-03-19 07:35:33.282 | + sudo 
/usr/l

[Yahoo-eng-team] [Bug 1291926] Re: MismatchError test_update_all_quota_resources_for_tenant

2014-03-19 Thread OpenStack Infra
*** This bug is a duplicate of bug 1291162 ***
https://bugs.launchpad.net/bugs/1291162

Reviewed:  https://review.openstack.org/81503
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=27a8c5641d4d419e6e67ed22333a159527356760
Submitter: Jenkins
Branch:master

commit 27a8c5641d4d419e6e67ed22333a159527356760
Author: Sean Dague 
Date:   Wed Mar 19 07:46:42 2014 -0400

fix cinder quota equality

The cinder quota test was of bad quality, and assumed that all
the functions in the class ran in linear order to work. If they
run in a different order the tenant could have additional quota
values beyond the strict defaults.

We can fix this by testing the returned quota contains the values
we're attempting to update.

Closes-Bug: #1291926

Change-Id: I53a154aac61368b7c20ac5703f3877fcf42f9781
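
In spirit, the fix replaces strict dict equality with a containment
check; a hedged, runnable sketch of the idea (not the actual tempest
diff):

# Assert the updated keys/values appear in the returned quota set,
# ignoring extra per-volume-type keys left over from other tests.
def assert_quota_contains(expected, actual):
    for key, value in expected.items():
        assert actual.get(key) == value, (
            '%s: expected %r, got %r' % (key, value, actual.get(key)))

assert_quota_contains(
    {'volumes': 11, 'snapshots': 11, 'gigabytes': 1009},
    {'volumes': 11, 'snapshots': 11, 'gigabytes': 1009,
     'volumes_volume-type--928001277': -1})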


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291926

Title:
  MismatchError test_update_all_quota_resources_for_tenant

Status in Cinder:
  Invalid
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  Observed in the Neutron full job

  2014-03-13 02:08:54.390 | Traceback (most recent call last):
  2014-03-13 02:08:54.390 |   File 
"tempest/api/volume/admin/test_volume_quotas.py", line 67, in 
test_update_all_quota_resources_for_tenant
  2014-03-13 02:08:54.390 | self.assertEqual(new_quota_set, quota_set)
  2014-03-13 02:08:54.391 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 321, in 
assertEqual
  2014-03-13 02:08:54.391 | self.assertThat(observed, matcher, message)
  2014-03-13 02:08:54.391 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in 
assertThat
  2014-03-13 02:08:54.392 | raise mismatch_error
  2014-03-13 02:08:54.392 | MismatchError: !=:
  2014-03-13 02:08:54.392 | reference = {'gigabytes': 1009, 'snapshots': 11, 
'volumes': 11}
  2014-03-13 02:08:54.393 | actual= {'gigabytes': 1009,
  2014-03-13 02:08:54.393 |  'gigabytes_volume-type--928001277': -1,
  2014-03-13 02:08:54.393 |  'snapshots': 11,
  2014-03-13 02:08:54.394 |  'snapshots_volume-type--928001277': -1,
  2014-03-13 02:08:54.394 |  'volumes': 11,
  2014-03-13 02:08:54.394 |  'volumes_volume-type--928001277': -1}

  Example here http://logs.openstack.org/82/67382/3/check/check-tempest-
  dsvm-neutron-full/8ffd266/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1291926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294682] [NEW] Hyper-v resizing issues when using differencing images

2014-03-19 Thread Petrut Lucian
Public bug reported:

When spawning a new VM using cow images, the root image won't get the
size of the flavor. Instead, it will have the size of the base image, as
no resize is attempted. In this case, the proper size should be specified
when creating the differencing image.

If after that one attempts to resize the VM, the
get_internal_vhd_size_by_file_size method will raise an exception as it
can't get this info out of the differencing image. Instead of raising
this exception, this method may recurse by calling itself on the parent
of the differencing image. This happens on both V1 and V2 namespaces.

Trace: http://paste.openstack.org/show/73825/
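
The suggested recursion, sketched with a toy in-memory VHD chain (the
real logic lives in nova's Hyper-V vhdutils; the names and the overhead
figure here are assumptions, not the actual implementation):

DIFFERENCING = 'differencing'

VHDS = {
    'child.vhd': {'type': DIFFERENCING, 'parent': 'base.vhd'},
    'base.vhd': {'type': 'dynamic', 'metadata_overhead': 2 * 1024 ** 2},
}

def internal_size_by_file_size(path, file_size):
    vhd = VHDS[path]
    if vhd['type'] == DIFFERENCING:
        # A differencing disk carries no size metadata of its own:
        # recurse to its parent instead of raising.
        return internal_size_by_file_size(vhd['parent'], file_size)
    return file_size - vhd['metadata_overhead']

print(internal_size_by_file_size('child.vhd', 10 * 1024 ** 3))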

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294682

Title:
  Hyper-v resizing issues when using differencing images

Status in OpenStack Compute (Nova):
  New

Bug description:
  When spawning a new VM using cow images, the root image won't get the
  size of the flavor. Instead, it will have the size of the base image,
  as no resize is attempted. In this case, the proper size should be
  specified when creating the differencing image.

  If after that one attempts to resize the VM, the
  get_internal_vhd_size_by_file_size method will raise an exception as
  it can't get this info out of the differencing image. Instead of
  raising this exception, this method may recurse by calling itself on
  the parent of the differencing image. This happens on both V1 and V2
  namespaces.

  Trace: http://paste.openstack.org/show/73825/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294668] [NEW] Overview usage data not filtered by default dates

2014-03-19 Thread Thiago Paiva Brito
Public bug reported:

When one opens the project overview page, the date selectors are set to
span from the first day of the month to the current date.

What should be happening:
Usage data should be filtered by default by those default dates.

What is currently happening:
Usage data shown is only the usage of the current day.

It is important to fix this since this can lead a user to wrong
assumptions regarding project usage.

Obs.: This bug can be verified using the fix that adds the default date
filters on the page (screenshot attached):
https://review.openstack.org/#/c/79688/

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot"
   
https://bugs.launchpad.net/bugs/1294668/+attachment/4032359/+files/Captura%20de%20tela%20de%202014-03-19%2010%3A43%3A52.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1294668

Title:
  Overview usage data not filtered by default dates

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When one opens the project overview page, the date selectors are set
  to span from the first day of the month to the current date.

  What should be happening:
  Usage data should be filtered by default by those default dates.

  What is currently happening:
  Usage data shown is only the usage of the current day.

  It is important to fix this since this can lead a user to wrong
  assumptions regarding project usage.

  Obs.: This bug can be verified using the fix that adds the default
  date filters on the page (screenshot attached):
  https://review.openstack.org/#/c/79688/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1294668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1148165] Re: bad request syntax in metadata server

2014-03-19 Thread Scott Moser
fix released in 0.3.2

** Changed in: cirros
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1148165

Title:
  bad request syntax in metadata server

Status in CirrOS a tiny cloud guest:
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When spinning up a VM in latest Devstack (G3 timeframe), with debug
  logging turned off I get several 'Bad request syntax' error messages.

  This may actually be a problem in cloud-init.

  2013-03-06 00:58:21.176 6742 INFO nova.metadata.wsgi.server [-] (6742)
  accepted ('10.0.0.2', 53364)

  10.0.0.2 - - [06/Mar/2013 00:58:21] code 400, message Bad request syntax 
('GET /2009-04-04/meta-data// /openssh-key HTTP/1.1')
  10.0.0.2 - - [06/Mar/2013 00:58:21] "GET /2009-04-04/meta-data// 
/openssh-key HTTP/1.1" 400 -
  2013-03-06 00:58:21.369 6742 INFO nova.metadata.wsgi.server [-] (6742) 
accepted ('10.0.0.2', 53365)

  10.0.0.2 - - [06/Mar/2013 00:58:21] code 400, message Bad request syntax 
('GET /2009-04-04/meta-data//  404 Not Found/openssh-key 
HTTP/1.1')
  10.0.0.2 - - [06/Mar/2013 00:58:21] "GET /2009-04-04/meta-data//  404 
Not Found/openssh-key HTTP/1.1" 400 -
  2013-03-06 00:58:21.574 6742 INFO nova.metadata.wsgi.server [-] (6742) 
accepted ('10.0.0.2', 53366)

  10.0.0.2 - - [06/Mar/2013 00:58:21] code 400, message Bad request syntax 
('GET /2009-04-04/meta-data// /openssh-key HTTP/1.1')
  10.0.0.2 - - [06/Mar/2013 00:58:21] "GET /2009-04-04/meta-data// 
/openssh-key HTTP/1.1" 400 -
  2013-03-06 00:58:21.790 6742 INFO nova.metadata.wsgi.server [-] (6742) 
accepted ('10.0.0.2', 53367)

  10.0.0.2 - - [06/Mar/2013 00:58:21] code 400, message Bad request syntax 
('GET /2009-04-04/meta-data// /openssh-key HTTP/1.1')
  10.0.0.2 - - [06/Mar/2013 00:58:21] "GET /2009-04-04/meta-data// 
/openssh-key HTTP/1.1" 400 -
  2013-03-06 00:58:34.748 6742 INFO nova.metadata.wsgi.server [-] (6742) 
accepted ('10.0.0.2', 53368)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1148165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294653] [NEW] havana (2013.2) prepare breaks due to version conflict on pbr

2014-03-19 Thread Ivan Melnikov
Public bug reported:

When anvil master is used to install OpenStack from havana-2013.2.yaml,
the prepare action fails on the download stage. Specifically, egg_info
on the older hacking version is not run because of a version conflict on
pbr (it requires pbr < 0.6, while at that moment we already have pbr 0.7
in anvil's virtualenv).

The newer pbr was probably brought in by keystoneclient and glanceclient
from optional-requirements.txt.

** Affects: anvil
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1294653

Title:
  havana (2013.2) prepare breaks due to version conflict on pbr

Status in ANVIL for forging OpenStack.:
  New

Bug description:
  When anvil master is used to install OpenStack from
  havana-2013.2.yaml, the prepare action fails on the download stage.
  Specifically, egg_info on the older hacking version is not run
  because of a version conflict on pbr (it requires pbr < 0.6, while at
  that moment we already have pbr 0.7 in anvil's virtualenv).

  The newer pbr was probably brought in by keystoneclient and
  glanceclient from optional-requirements.txt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1294653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294511] Re: test_aggregate_add_host_create_server_with_az fails with remote compute connection scenario

2014-03-19 Thread Mauro Sergio Martins Rodrigues
Thanks for your investigation!

Can you provide more information about your setup, and the logs of your
run?

Currently I have no deployment with multiple nodes, so I ask: does it
happen all the time?


** Changed in: tempest
   Status: New => Incomplete

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294511

Title:
  test_aggregate_add_host_create_server_with_az fails with remote
  compute connection scenario

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Incomplete

Bug description:
  Problem:
  In a deployment that is not all-in-one, i.e. where the controller node
  connects to a remote nova compute node, the tempest test case
  test_aggregate_add_host_create_server_with_az fails when creating a
  server with an availability zone; the server is created with an error
  status as below.

  {"message": "NV-67B7376 No valid host was found. ", "code": 500,
  "details": "  File \"/usr/lib/python2.6/site-
  packages/nova/scheduler/filter_scheduler.py\", line 108, in
  schedule_run_instance

  Basic investigation:

  The code logic adds the nova compute host that is the same as the
  controller node by default. In the scenario above the compute node is
  not the same as the controller but a remote nova compute node, so it
  will show "No valid host was found".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286416] Re: routerl3agentbindings database migration error

2014-03-19 Thread Li Ma
It is fixed by this bug:
https://bugs.launchpad.net/neutron/+bug/1293089

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1286416

Title:
  routerl3agentbindings database migration error

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Currently I'm fixing the bug:
  https://bugs.launchpad.net/neutron/+bug/1230323

  Unfortunately, I hit another bug: the table
  ml2.routerl3agentbindings doesn't support database migration.

  I added a uniqueness constraint to a column in that table and also
  wrote a migration script for that modification; however, devstack
  shows an exception that the table has not been created (a sketch of
  such a migration follows the error below).

  sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1146, "Table
  'neutron_ml2.routerl3agentbindings' doesn't exist") 'ALTER TABLE
  routerl3agentbindings ADD CONSTRAINT
  uniq_routerl3agentbindings0router_id UNIQUE (router_id)' ()
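
  A minimal sketch of such an alembic migration, with the constraint and
  table names taken from the error above (illustrative, not the actual
  patch under review):

      # Hypothetical migration body; on a non-ml2 schema the table was
      # never created, which is why the ALTER TABLE above fails.
      from alembic import op

      def upgrade():
          op.create_unique_constraint(
              'uniq_routerl3agentbindings0router_id',
              'routerl3agentbindings',
              ['router_id'])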

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1286416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294603] Re: scenario test_load_balancer_basic fails

2014-03-19 Thread Mauro Sergio Martins Rodrigues
I can't point out what is causing this; anyway, there are several
tracebacks available in
http://logs.openstack.org/98/81098/2/check/check-tempest-dsvm-neutron-pg/df24b97/logs/screen-q-lbaas.txt.gz

It seems to me that one of the balancers had a problem, although it
passed the connection check, so maybe adding a retry with a timeout at
https://github.com/openstack/tempest/blob/master/tempest/scenario/test_load_balancer_basic.py#L213
will help to solve this issue; see the sketch below.

Adding the neutron team to hear their thoughts.
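
A minimal sketch of such a retry (Python 2, matching tempest at the
time; the helper name and parameters are illustrative, not the actual
fix):

    import time
    import urllib

    # Hypothetical retry-with-timeout around the urlopen call that the
    # traceback in the description shows failing with ECONNRESET.
    def read_vip_with_retry(vip_ip, attempts=5, delay=2):
        for attempt in range(attempts):
            try:
                return urllib.urlopen("http://{0}/".format(vip_ip)).read()
            except IOError:
                # Give the balancer time to settle, then retry;
                # re-raise on the last attempt.
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)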

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Confirmed

** Changed in: tempest
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294603

Title:
  scenario test_load_balancer_basic fails

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Confirmed

Bug description:
  http://logs.openstack.org/98/81098/2/check/check-tempest-dsvm-neutron-
  pg/df24b97/console.html

  2014-03-19 09:58:15.379 | Traceback (most recent call last):
  2014-03-19 09:58:15.379 |   File "tempest/test.py", line 121, in wrapper
  2014-03-19 09:58:15.379 | return f(self, *func_args, **func_kwargs)
  2014-03-19 09:58:15.379 |   File 
"tempest/scenario/test_load_balancer_basic.py", line 225, in 
test_load_balancer_basic
  2014-03-19 09:58:15.379 | self._check_load_balancing()
  2014-03-19 09:58:15.379 |   File 
"tempest/scenario/test_load_balancer_basic.py", line 213, in 
_check_load_balancing
  2014-03-19 09:58:15.379 | "http://{0}/".format(self.vip_ip)).read())
  2014-03-19 09:58:15.380 |   File "/usr/lib/python2.7/urllib.py", line 86, in 
urlopen
  2014-03-19 09:58:15.380 | return opener.open(url)
  2014-03-19 09:58:15.380 |   File "/usr/lib/python2.7/urllib.py", line 207, in 
open
  2014-03-19 09:58:15.380 | return getattr(self, name)(url)
  2014-03-19 09:58:15.380 |   File "/usr/lib/python2.7/urllib.py", line 345, in 
open_http
  2014-03-19 09:58:15.380 | errcode, errmsg, headers = h.getreply()
  2014-03-19 09:58:15.380 |   File "/usr/lib/python2.7/httplib.py", line 1102, 
in getreply
  2014-03-19 09:58:15.380 | response = self._conn.getresponse()
  2014-03-19 09:58:15.380 |   File "/usr/lib/python2.7/httplib.py", line 1030, 
in getresponse
  2014-03-19 09:58:15.380 | response.begin()
  2014-03-19 09:58:15.380 |   File "/usr/lib/python2.7/httplib.py", line 407, 
in begin
  2014-03-19 09:58:15.381 | version, status, reason = self._read_status()
  2014-03-19 09:58:15.381 |   File "/usr/lib/python2.7/httplib.py", line 365, 
in _read_status
  2014-03-19 09:58:15.381 | line = self.fp.readline()
  2014-03-19 09:58:15.381 |   File "/usr/lib/python2.7/socket.py", line 430, in 
readline
  2014-03-19 09:58:15.381 | data = recv(1)
  2014-03-19 09:58:15.381 | IOError: [Errno socket error] [Errno 104] 
Connection reset by peer

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1248757] Re: test_snapshot_pattern fails with paramiko ssh EOFError

2014-03-19 Thread Sean Dague
** No longer affects: glance

** Summary changed:

- test_snapshot_pattern fails with paramiko ssh EOFError
+ test_snapshot_pattern fails because Neutron fails max attempts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1248757

Title:
  test_snapshot_pattern fails because Neutron fails max attempts

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  I haven't seen this one reported yet (or seen it yet):

  http://logs.openstack.org/55/55455/1/check/check-tempest-devstack-vm-
  neutron/28d1ed7/console.html

  http://paste.openstack.org/show/50561/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1248757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294597] [NEW] libvirt vif plug_ovs_hybrid should roll back ip link set when fail

2014-03-19 Thread kexiaodong
Public bug reported:

This problem is about [ plug_ovs_hybrid ] in libvirt/vif.py.
In this function, there is no try/except to roll back "ip link set".

    if not linux_net.device_exists(v2_name):
        linux_net._create_veth_pair(v1_name, v2_name)
        utils.execute('ip', 'link', 'set', br_name, 'up', run_as_root=True)
        # <--- this action should be rolled back when creation fails
        utils.execute('brctl', 'addif', br_name, v1_name, run_as_root=True)
        linux_net.create_ovs_vif_port(self.get_bridge_name(vif),
                                      v2_name, iface_id, vif['address'],
                                      instance['uuid'])

In the case below, the instance will lose its virtual interface
(resume_guests_state_on_host_boot = True):
1. Create an instance (named vm1) with a network and wait until it is
active.
2. Reset the host OS and wait until it is ready.
3. Before the openvswitch bridge is created, start the nova-compute
service for the first time.
   The init_host function in the nova compute manager will call
plug_ovs_hybrid and finally raise an exception at create_ovs_vif_port.
   As a result, the nova-compute service will stop without rolling back
the "ip link set" of the instance (vm1).
4. Start the openvswitch service and create the bridge for openstack.
5. Restart the nova-compute service, which then recreates the instances,
including vm1.
   When recreating vm1, because "ip link set" was not rolled back, it
will return false, and vm1 will lose its virtual interface.

I think we should roll back "ip link set", or set the instance status to
ERROR; a rollback could look like the sketch below.
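
A minimal sketch of such a rollback, assuming the same names as in
libvirt/vif.py (delete_net_dev as the cleanup counterpart is an
assumption; this is an illustration, not the actual nova patch):

    if not linux_net.device_exists(v2_name):
        linux_net._create_veth_pair(v1_name, v2_name)
        utils.execute('ip', 'link', 'set', br_name, 'up', run_as_root=True)
        try:
            utils.execute('brctl', 'addif', br_name, v1_name,
                          run_as_root=True)
            linux_net.create_ovs_vif_port(self.get_bridge_name(vif),
                                          v2_name, iface_id, vif['address'],
                                          instance['uuid'])
        except Exception:
            # Undo the earlier steps so a later restart finds a clean
            # state instead of a half-plugged VIF.
            utils.execute('ip', 'link', 'set', br_name, 'down',
                          run_as_root=True)
            linux_net.delete_net_dev(v1_name)  # assumed cleanup helper
            raise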

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  This problem is about [ plug_ovs_hybrid ] in libvirt/vif.py.
  In this function, there is not try..catch to roll back "ip link set".
  
- if not linux_net.device_exists(v2_name):
- linux_net._create_veth_pair(v1_name, v2_name)
- utils.execute('ip', 'link', 'set', br_name, 'up', 
run_as_root=True)  <--- should roll back here
- utils.execute('brctl', 'addif', br_name, v1_name, 
run_as_root=True)
- linux_net.create_ovs_vif_port(self.get_bridge_name(vif),  
  <--- exception here
-   v2_name, iface_id, vif['address'],
-   instance['uuid'])
+ if not linux_net.device_exists(v2_name):
+ linux_net._create_veth_pair(v1_name, v2_name)
+ utils.execute('ip', 'link', 'set', br_name, 'up', run_as_root=True)  
+<--- this action should roll back when create fail
+ utils.execute('brctl', 'addif', br_name, v1_name, run_as_root=True)
+ linux_net.create_ovs_vif_port(self.get_bridge_name(vif),   
+   v2_name, iface_id, vif['address'],
+   instance['uuid'])
+ 
  
  In below case, it will result that instance will lost it's virtual 
interface(resume_guests_state_on_host_boot = True) :
  1. Create a instance (name vm1) with network, wait until active.
  2. Reset the Host OS, and wait until it is ready.
  3. Before creating openvswitch bridge, start nova-compute service at the 
first time.
 The init_host function in nova compute manager will call plug_ovs_hybrid, 
and raise a exception at create_ovs_vif_port.
-  init_host -> _init_instance (vm1)  -> plug_ovs_hybrid -> 
create_ovs_vif_port
+  init_host -> _init_instance (vm1) -> plug_ovs_hybrid -> 
create_ovs_vif_port
 The nova-compute service will stop, and do not roll back the "ip link set" 
of vm1.
  4. Start openvswitch service, and create bridge for openstack.
  5. Restart nova-compute service, and then recreat instances, include vm1.
 When creating vm1, because of do not roll back "ip link set", if will 
return false, and vm1 will lost it's virtual interface.
  
  I think we should roll back ip link set, or set instance status to
  ERROR.

** Description changed:

  This problem is about [ plug_ovs_hybrid ] in libvirt/vif.py.
  In this function, there is not try..catch to roll back "ip link set".
  
  if not linux_net.device_exists(v2_name):
  linux_net._create_veth_pair(v1_name, v2_name)
- utils.execute('ip', 'link', 'set', br_name, 'up', run_as_root=True)  
+ utils.execute('ip', 'link', 'set', br_name, 'up', run_as_root=True)
 <--- this action should roll back when create fail
  utils.execute('brctl', 'addif', br_name, v1_name, run_as_root=True)
- linux_net.create_ovs_vif_port(self.get_bridge_name(vif),   
+ linux_net.create_ovs_vif_port(self.get_bridge_name(vif),
v2_name, iface_id, vif['address'],
instance['uuid'])
- 
  
  In below case, it will result that instance will lost it's virtual 
interface(resume_guests_state_on_host_boot = True) :
  1. Create a instance (name vm1) with network, wait until active.
  2. Reset the Host OS, and wait until it is ready.
  3. Before creating openvswitch brid

[Yahoo-eng-team] [Bug 1251448] Re: BadRequest: Multiple possible networks found, use a Network ID to be more specific.

2014-03-19 Thread Mark McClain
** Changed in: neutron
Milestone: icehouse-rc1 => None

** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251448

Title:
  BadRequest: Multiple possible networks found, use a Network ID to be
  more specific.

Status in Tempest:
  New

Bug description:
  Gate (only neutron-based) is periodically failing with the following
  error:

  "BadRequest: Multiple possible networks found, use a Network ID to be
  more specific. "

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiIHBvc3NpYmxlIG5ldHdvcmtzIGZvdW5kLCB1c2UgYSBOZXR3b3JrIElEIHRvIGJlIG1vcmUgc3BlY2lmaWMuIChIVFRQIDQwMClcIiBBTkQgZmlsZW5hbWU6XCJjb25zb2xlLmh0bWxcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg0NDY2ODA0Mjg2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  query:

  message:" possible networks found, use a Network ID to be more
  specific. (HTTP 400)" AND filename:"console.html"

  Example: http://logs.openstack.org/75/54275/3/check/check-tempest-
  devstack-vm-neutron-pg/61a2974/console.html

  Failure breakdown by job:
   check-tempest-devstack-vm-neutron-pg  34%
   check-tempest-devstack-vm-neutron     24%
   gate-tempest-devstack-vm-neutron      10%
   gate-tempest-devstack-vm-neutron-pg    5%
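
  The error itself means the server-create call has to pass an explicit
  network once the tenant has more than one; a minimal novaclient-style
  sketch (the variable names are illustrative, not the tempest fix):

      # Hypothetical: pin the server to one network to avoid the
      # "Multiple possible networks found" BadRequest.
      server = nova_client.servers.create(
          name='test-server',
          image=image_id,
          flavor=flavor_id,
          nics=[{'net-id': network_id}])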

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1251448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294587] [NEW] Old style Images with vhd files tarred along with a folder are not handled in nova xen plugin

2014-03-19 Thread Sumanth Nagadavalli
Public bug reported:

There are some legacy snapshots uploaded into swift with the vhd files
bundled in a folder called "image".

They are in the following format:
image/
    snap.vhd
    image.vhd

Right now, in the glance plugin, after downloading the image, the
old-style image handling expects the vhd files to have been downloaded
directly into the staging_path:

https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py#167

But in the case of these legacy images, the staging path instead
contains a folder called "image" with the vhd files inside it.

So this level of nesting in the downloaded image is not supported today
when handling old-style images; a flattening sketch follows below.
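
A minimal sketch of flattening that legacy layout before the existing
old-style handling runs (the helper name and the in-place move are
illustrative, not the actual plugin fix):

    import os
    import shutil

    # Hypothetical: move vhd files out of a nested "image/" folder up
    # into staging_path so the existing old-style handling finds them.
    def flatten_legacy_image(staging_path):
        nested = os.path.join(staging_path, 'image')
        if os.path.isdir(nested):
            for name in os.listdir(nested):
                if name.endswith('.vhd'):
                    shutil.move(os.path.join(nested, name), staging_path)
            if not os.listdir(nested):
                os.rmdir(nested)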

** Affects: nova
 Importance: Undecided
 Assignee: Sumanth Nagadavalli (sumanth-nagadavalli)
 Status: New


** Tags: nova plugin xen xenserver

** Changed in: nova
 Assignee: (unassigned) => Sumanth Nagadavalli (sumanth-nagadavalli)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294587

Title:
  Old style Images with vhd files tarred along with a folder are not
  handled in nova xen plugin

Status in OpenStack Compute (Nova):
  New

Bug description:
  There are some legacy snapshots which are uploaded into swift, with
  vhd files bundled in a folder called "image".

  They are in the below format,
  image/
  snap.vhd
  image.vhd

  
  Right now, in glance plugin, after downloading the image, when we try to 
handle old style images, we expected vhd files to be downloaded in the 
staging_path.

  
https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py#167

  But in this case of legacy images, there is a folder called "image"
  with vhd files, downloaded in the staging path.

  So, the level of recursiveness in the downloaded image is not
  supported today, while handling old style image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1118388] Re: Ultra outdated docstring for NvpPlugin

2014-03-19 Thread Mark McClain
This was fixed alongside other refactoring.

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
Milestone: icehouse-rc1 => None

** Changed in: neutron
 Assignee: Sachin Thakkar (sthakkar) => (unassigned)

** Changed in: neutron
   Importance: Low => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1118388

Title:
  Ultra outdated docstring for NvpPlugin

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  This sounds so 'Essex'.

  """
  NvpPluginV2 is a Quantum plugin that provides L2 Virtual Network
  functionality using NVP.
  """

  The docstring should be updated to reflect all the features supported
  by the plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1118388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294568] [NEW] Unable to create the Neutron network net_local because of constraints for db2

2014-03-19 Thread Jun Xie
Public bug reported:

CREATE TABLE subnets (
    tenant_id VARCHAR(255),
    id VARCHAR(36) NOT NULL,
    name VARCHAR(255),
    network_id VARCHAR(36),
    ip_version INT NOT NULL,
    cidr VARCHAR(64) NOT NULL,
    gateway_ip VARCHAR(64),
    enable_dhcp SMALLINT,
    shared SMALLINT,
    ipv6_ra_mode VARCHAR(16),
    ipv6_address_mode VARCHAR(16),
    PRIMARY KEY (id),
    FOREIGN KEY(network_id) REFERENCES networks (id),
    CHECK (enable_dhcp IN (0, 1)),
    CHECK (shared IN (0, 1)),
    CONSTRAINT ipv6_modes CHECK (ipv6_ra_mode IN ('slaac', 'dhcpv6-stateful', 'dhcpv6-stateless')),
    CONSTRAINT ipv6_modes CHECK (ipv6_address_mode IN ('slaac', 'dhcpv6-stateful', 'dhcpv6-stateless'))
)

On DB2 this fails because the name ipv6_modes is used twice as a
constraint name. In DB2, a constraint-name must not identify a
constraint that was already specified within the same CREATE TABLE
statement (SQLSTATE 42710). A sketch of the fix follows.
=
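
A minimal sketch of the fix on the SQLAlchemy side, giving each CHECK
constraint a distinct name (column and constraint names follow the DDL
above; the model is illustrative, not neutron's actual subnet model):

    from sqlalchemy import (CheckConstraint, Column, MetaData, String,
                            Table)

    metadata = MetaData()
    subnets = Table(
        'subnets', metadata,
        Column('id', String(36), primary_key=True),
        Column('ipv6_ra_mode', String(16)),
        Column('ipv6_address_mode', String(16)),
        # Distinct constraint names avoid DB2's SQLSTATE 42710.
        CheckConstraint(
            "ipv6_ra_mode IN ('slaac', 'dhcpv6-stateful', "
            "'dhcpv6-stateless')", name='ipv6_ra_modes'),
        CheckConstraint(
            "ipv6_address_mode IN ('slaac', 'dhcpv6-stateful', "
            "'dhcpv6-stateless')", name='ipv6_address_modes'),
    )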

Checked neutron server.log and found

2014-03-18 18:37:45.799 19954 TRACE neutron Traceback (most recent call last):
2014-03-18 18:37:45.799 19954 TRACE neutron   File "/usr/bin/neutron-server", line 10, in <module>
2014-03-18 18:37:45.799 19954 TRACE neutron sys.exit(main())
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/server/__init__.py", line 54, in main
2014-03-18 18:37:45.799 19954 TRACE neutron neutron_api = 
service.serve_wsgi(service.NeutronApiService)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/service.py", line 113, in serve_wsgi
2014-03-18 18:37:45.799 19954 TRACE neutron LOG.exception(_('Unrecoverable 
error: please check log '
2014-03-18 18:37:45.799 19954 TRACE neutron   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/excutils.py", line 68, in __exit__
2014-03-18 18:37:45.799 19954 TRACE neutron six.reraise(self.type_, 
self.value, self.tb)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/service.py", line 106, in serve_wsgi
2014-03-18 18:37:45.799 19954 TRACE neutron service.start()
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/service.py", line 75, in start
2014-03-18 18:37:45.799 19954 TRACE neutron self.wsgi_app = 
_run_wsgi(self.app_name)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/service.py", line 175, in _run_wsgi
2014-03-18 18:37:45.799 19954 TRACE neutron app = 
config.load_paste_app(app_name)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/common/config.py", line 170, in 
load_paste_app
2014-03-18 18:37:45.799 19954 TRACE neutron app = 
deploy.loadapp("config:%s" % config_path, name=app_name)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
2014-03-18 18:37:45.799 19954 TRACE neutron return loadobj(APP, uri, 
name=name, **kw)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
2014-03-18 18:37:45.799 19954 TRACE neutron return context.create()
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2014-03-18 18:37:45.799 19954 TRACE neutron return 
self.object_type.invoke(self)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2014-03-18 18:37:45.799 19954 TRACE neutron **context.local_conf)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/util.py", line 56, in fix_call
2014-03-18 18:37:45.799 19954 TRACE neutron val = callable(*args, **kw)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/urlmap.py", line 25, in urlmap_factory
2014-03-18 18:37:45.799 19954 TRACE neutron app = loader.get_app(app_name, 
global_conf=global_conf)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
2014-03-18 18:37:45.799 19954 TRACE neutron name=name, 
global_conf=global_conf).create()
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2014-03-18 18:37:45.799 19954 TRACE neutron return 
self.object_type.invoke(self)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2014-03-18 18:37:45.799 19954 TRACE neutron **context.local_conf)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/paste/deploy/util.py", line 56, in fix_call
2014-03-18 18:37:45.799 19954 TRACE neutron val = callable(*args, **kw)
2014-03-18 18:37:45.799 19954 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/auth.py",

[Yahoo-eng-team] [Bug 1294556] [NEW] os-aggregates sample files mismatch

2014-03-19 Thread Haiwei Xu
Public bug reported:

os-aggregates' sample files are different in V2 and V3.

~$ vi /opt/stack/nova/doc/api_samples/os-aggregates/aggregates-list-get-resp.json

{
    "aggregates": [
        {
            "availability_zone": "nova",
            "created_at": "2012-11-16T06:22:23.361359",
            "deleted": false,★
            "deleted_at": null,
            "hosts": [],
            "id": 1,
            "metadata": {
                "availability_zone": "nova"
            },
            "name": "name",
            "updated_at": null
        }
    ]
}

~$ vi /opt/stack/nova/doc/v3/api_samples/os-aggregates/aggregates-list-get-resp.json

{
    "aggregates": [
        {
            "availability_zone": "nova",
            "created_at": "2013-08-18T12:17:56.856455",
            "deleted": 0,★
            "deleted_at": null,
            "hosts": [],
            "id": 1,
            "metadata": {
                "availability_zone": "nova"
            },
            "name": "name",
            "updated_at": null
        }
    ]
}

The 'deleted' element is 'false' in V2 but '0' in V3, and the same
holds for aggregates-get-resp.json.

I also found that in the response from the API, 'deleted' is 'false':

 curl -i 'http://10.21.42.98:8774/v3/os-aggregates' -X GET -H "X-Auth-
Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept:
application/json" -H "X-Auth-Token: MIISPQYJKoZIhvcNA...

RESP BODY: {"aggregates": [{"name": "agg1", "availability_zone": "nova",
★"deleted": false,★ "created_at": "2014-03-18T19:38:33.00",
"updated_at": null, "hosts": [], "deleted_at": null, "id": 1,
"metadata": {"availability_zone": "nova"}}, {"name": "agg2",
"availability_zone": null, "deleted": false, "created_at":
"2014-03-18T19:41:06.00", "updated_at": null, "hosts": [],
"deleted_at": null, "id": 2, "metadata": {}}]}

So I think this is a bug in the V3 sample file.

** Affects: nova
 Importance: Undecided
 Assignee: Haiwei Xu (xu-haiwei)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Haiwei Xu (xu-haiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294556

Title:
  os-aggregates sample files mismatch

Status in OpenStack Compute (Nova):
  New

Bug description:
  os-aggregates' sample files are different in V2 and V3.

  ~$ vi /opt/stack/nova/doc/api_samples/os-aggregates/aggregates-list-get-resp.json

  {
      "aggregates": [
          {
              "availability_zone": "nova",
              "created_at": "2012-11-16T06:22:23.361359",
              "deleted": false,★
              "deleted_at": null,
              "hosts": [],
              "id": 1,
              "metadata": {
                  "availability_zone": "nova"
              },
              "name": "name",
              "updated_at": null
          }
      ]
  }

  ~$ vi /opt/stack/nova/doc/v3/api_samples/os-aggregates/aggregates-list-get-resp.json

  {
      "aggregates": [
          {
              "availability_zone": "nova",
              "created_at": "2013-08-18T12:17:56.856455",
              "deleted": 0,★
              "deleted_at": null,
              "hosts": [],
              "id": 1,
              "metadata": {
                  "availability_zone": "nova"
              },
              "name": "name",
              "updated_at": null
          }
      ]
  }

  The 'deleted' element is 'false' in V2 but '0' in V3, and the same
  holds for aggregates-get-resp.json.

  I also found that in the response from the API, 'deleted' is 'false':

   curl -i 'http://10.21.42.98:8774/v3/os-aggregates' -X GET -H "X-Auth-
  Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept:
  application/json" -H "X-Auth-Token: MIISPQYJKoZIhvcNA...

  RESP BODY: {"aggregates": [{"name": "agg1", "availability_zone":
  "nova", ★"deleted": false,★ "created_at":
  "2014-03-18T19:38:33.00", "updated_at": null, "hosts": [],
  "deleted_at": null, "id": 1, "metadata": {"availability_zone":
  "nova"}}, {"name": "agg2", "availability_zone": null, "deleted":
  false, "created_at": "2014-03-18T19:41:06.00", "updated_at": null,
  "hosts": [], "deleted_at": null, "id": 2, "metadata": {}}]}

  So I think this is a bug in the V3 sample file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294554] [NEW] Create port ERROR in n1kv of Cisco plugin

2014-03-19 Thread shihanzhang
Public bug reported:


In my unit test, creating a port always results in an error in n1kv. My
unit test in neutron/tests/unit/test_db_plugin.py is:

    def test_prevent_used_dhcp_port_deletion(self):
        with self.network() as network:
            data = {'port': {'network_id': network['network']['id'],
                             'tenant_id': 'tenant_id',
                             'device_owner': constants.DEVICE_OWNER_DHCP}}
            create_req = self.new_create_request('ports', data)
            res = self.deserialize(self.fmt,
                                   create_req.get_response(self.api))
            del_req = self.new_delete_request('ports', res['port']['id'])
            delete_res = del_req.get_response(self.api)
            self.assertEqual(delete_res.status_int,
                             webob.exc.HTTPNoContent.code)

the error log is:

 Traceback (most recent call last):
   File "neutron/api/v2/resource.py", line 87, in resource
 result = method(request=request, **args)
   File "neutron/api/v2/base.py", line 419, in create
 obj = obj_creator(request.context, **kwargs)
   File "neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py", line 1188, in 
create_port
 p_profile = self._get_policy_profile_by_name(p_profile_name)
   File "neutron/plugins/cisco/db/n1kv_db_v2.py", line 1530, in 
_get_policy_profile_by_name
 filter_by(name=name).one())
   File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/sqlalchemy/orm/query.py",
 line 2323, in one
 raise orm_exc.NoResultFound("No row was found for one()")
 NoResultFound: No row was found for one()

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294554

Title:
  Create port ERROR in n1kv of Cisco plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  In my unit test, creating a port always results in an error in n1kv.
  My unit test in neutron/tests/unit/test_db_plugin.py is:

      def test_prevent_used_dhcp_port_deletion(self):
          with self.network() as network:
              data = {'port': {'network_id': network['network']['id'],
                               'tenant_id': 'tenant_id',
                               'device_owner': constants.DEVICE_OWNER_DHCP}}
              create_req = self.new_create_request('ports', data)
              res = self.deserialize(self.fmt,
                                     create_req.get_response(self.api))
              del_req = self.new_delete_request('ports',
                                                res['port']['id'])
              delete_res = del_req.get_response(self.api)
              self.assertEqual(delete_res.status_int,
                               webob.exc.HTTPNoContent.code)

  the error log is:

   Traceback (most recent call last):
 File "neutron/api/v2/resource.py", line 87, in resource
   result = method(request=request, **args)
 File "neutron/api/v2/base.py", line 419, in create
   obj = obj_creator(request.context, **kwargs)
 File "neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py", line 1188, in 
create_port
   p_profile = self._get_policy_profile_by_name(p_profile_name)
 File "neutron/plugins/cisco/db/n1kv_db_v2.py", line 1530, in 
_get_policy_profile_by_name
   filter_by(name=name).one())
 File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/sqlalchemy/orm/query.py",
 line 2323, in one
   raise orm_exc.NoResultFound("No row was found for one()")
   NoResultFound: No row was found for one()

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294541] [NEW] shared firewall policies can't be displayed in horizon

2014-03-19 Thread Yaguang Tang
Public bug reported:

A shared firewall policy created by tenant A can't be seen by tenant B
in horizon, but it is listed when using python-neutronclient.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1294541

Title:
  shared firewall policies can't be displayed in horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A shared firewall policy created by tenant A can't be seen by tenant B
  in horizon, but it is listed when using python-neutronclient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1294541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294537] [NEW] Sync excutils from oslo

2014-03-19 Thread Oleg Bondarev
Public bug reported:

In order to fix undesired error logs in Neutron (bug 1288188), the
fixed save_and_reraise_exception() should be synced from oslo; a usage
sketch follows below.
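
A minimal usage sketch of the fixed context manager (the reraise flag
follows oslo's excutils; the surrounding calls are hypothetical):

    from neutron.openstack.common import excutils

    try:
        do_something()  # hypothetical operation that may fail
    except Exception:
        with excutils.save_and_reraise_exception() as ctxt:
            if failure_is_benign():  # hypothetical check
                # Suppress both the reraise and the error log.
                ctxt.reraise = False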

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: In Progress

** Description changed:

- In order to fix undesired error logs in Neutron (bug 1288188) fixed 
save_and_reraise_exception()
- should be synced from oslo.
+ In order to fix undesired error logs in Neutron (bug 1288188) fixed
+ save_and_reraise_exception() should be synced from oslo.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294537

Title:
  Sync excutils from oslo

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In order to fix undesired error logs in Neutron (bug 1288188), the
  fixed save_and_reraise_exception() should be synced from oslo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294532] [NEW] Create user with tenantid failed when using ldap driver

2014-03-19 Thread nethawk
Public bug reported:

When using ldap as the identity driver instead of sql, creating a user
with a tenant id fails.
For example, when using this command: keystone user-create --name demo
--pass demo --tenant-id XX, it returns this error: ERROR {'info':
'tenantId: attribute type undefined', 'desc': 'Undefined attribute
type'}.

To resolve this bug, we must modify core.py in the path
keystone/common/ldap.

In BaseLdap.create() there is a statement like this:

    if k == 'id' or k in self.attribute_ignore:
        continue

It must be changed to this one:

    if k == 'id' or k in self.attribute_ignore or k == 'tenantId':
        continue

Then the above user-create command can succeed.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1294532

Title:
  Create user with tenantid failed when using ldap driver

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When using ldap as the identity driver instead of sql, creating a user
  with a tenant id fails.
  For example, when using this command: keystone user-create --name demo
  --pass demo --tenant-id XX, it returns this error: ERROR {'info':
  'tenantId: attribute type undefined', 'desc': 'Undefined attribute
  type'}.

  To resolve this bug, we must modify core.py in the path
  keystone/common/ldap.

  In BaseLdap.create() there is a statement like this:

      if k == 'id' or k in self.attribute_ignore:
          continue

  It must be changed to this one:

      if k == 'id' or k in self.attribute_ignore or k == 'tenantId':
          continue

  Then the above user-create command can succeed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1294532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294526] [NEW] floatingip's id should be used instead of floatingip itself

2014-03-19 Thread yong sheng gong
Public bug reported:

https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L476

The floating IP's id should be used as the hash key instead of the
floating IP itself; see the sketch below.
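
A minimal sketch of the suggested change, assuming fip is a floating-IP
dict like the one handled at the linked line (illustrative, not the
actual patch):

    # Hypothetical: key the status map by the immutable id rather than
    # by the whole floating-IP dict.
    fip_statuses = {}
    for fip in floating_ips:
        fip_statuses[fip['id']] = 'ACTIVE'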

** Affects: neutron
 Importance: Medium
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294526

Title:
  floatingip's id should be used instead of floatingip itself

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L476

  The floating IP's id should be used as the hash key instead of the
  floating IP itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294527] [NEW] nec plugin: should honor retry-after from openflow controller

2014-03-19 Thread Akihiro Motoki
Public bug reported:

The OpenFlow controller which the nec plugin talks to sometimes returns
Retry-After when it is busy. It is better to honor the Retry-After
header to avoid unnecessary user-visible errors due to a temporary busy
condition; see the sketch below.
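
A minimal sketch of honoring Retry-After, assuming an httplib-style
response object with .status and .getheader(); the nec plugin's real
client code differs:

    import time

    # Hypothetical: retry on 503, sleeping for the server-advertised
    # Retry-After interval (defaulting to 1 second if absent).
    def request_honoring_retry_after(do_request, max_retries=3):
        resp = do_request()
        for _ in range(max_retries):
            if resp.status != 503:
                break
            time.sleep(int(resp.getheader('Retry-After') or 1))
            resp = do_request()
        return resp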

** Affects: neutron
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress


** Tags: nec

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294527

Title:
  nec plugin: should honor retry-after from openflow controller

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The OpenFlow controller which the nec plugin talks to sometimes
  returns Retry-After when it is busy. It is better to honor the
  Retry-After header to avoid unnecessary user-visible errors due to a
  temporary busy condition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294509] [NEW] NoneType is returned from libvirt while get_vcpu_used

2014-03-19 Thread wangpan
Public bug reported:

169568 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] 
169569 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] File 
"/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1031, in 
allocate_fixed_ip
169570 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] self.add_fixed_ip_to_ipset(context, 
address, tenant_id)
169571 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] 
169572 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] File 
"/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 896, in 
add_fixed_ip_to_ipset
169573 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] tenant_id = 
fixed_ip_ref.instance['project_id']
169574 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] 
169575 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] TypeError: 'NoneType' object has no 
attribute '__getitem__'
169576 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] 
169577 2014-03-19 14:46:35.193 43907 TRACE nova.compute.manager [instance: 
94518175-a4ad-4b5c-be76-3888b09810b4] 

This is because of my commit below:
Reviewed: https://review.openstack.org/67361
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=844df860c38ac38550b8d1739fd53131cd7fd864
Submitter: Jenkins
Branch: master

commit 844df860c38ac38550b8d1739fd53131cd7fd864
Author: Wangpan 
Date: Fri Jan 17 10:35:12 2014 +0800

libvirt: handle exception while get vcpu info

If an exception is raised while get a libvirt domain's vcpu info,
the update_available_resource periodic task will be failed, which
will result in the resource of this compute node will never be
reported.

This patch add an exception handling to avoid this situation.
Closes-bug: #1270008

The check for a None return value was removed, but old python-libvirt
(e.g. 0.9.x) only raises libvirtError when the libvirt API returns -1,
and the old libvirt returns None rather than -1 when getting vcpu info
fails; please see the libvirt code below:
python/libvirt-override.c:

static PyObject *
libvirt_virDomainGetVcpus(PyObject *self ATTRIBUTE_UNUSED,
                          PyObject *args) {
    virDomainPtr domain;
    PyObject *pyobj_domain, *pyretval = NULL, *pycpuinfo = NULL, *pycpumap = NULL;
    virNodeInfo nodeinfo;
    virDomainInfo dominfo;
    virVcpuInfoPtr cpuinfo = NULL;
    unsigned char *cpumap = NULL;
    size_t cpumaplen, i;
    int i_retval;

    if (!PyArg_ParseTuple(args, (char *)"O:virDomainGetVcpus",
                          &pyobj_domain))
        return NULL;
    domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);

    LIBVIRT_BEGIN_ALLOW_THREADS;
    i_retval = virNodeGetInfo(virDomainGetConnect(domain), &nodeinfo);
    LIBVIRT_END_ALLOW_THREADS;
    if (i_retval < 0)
        return VIR_PY_NONE;

    LIBVIRT_BEGIN_ALLOW_THREADS;
    i_retval = virDomainGetInfo(domain, &dominfo);
    LIBVIRT_END_ALLOW_THREADS;
    if (i_retval < 0)
        return VIR_PY_NONE;
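
A minimal sketch of the None-guard this implies nova needs around the
vcpus() call, in addition to catching libvirtError (the helper and its
return convention are illustrative, not the actual nova patch):

    import libvirt

    # Hypothetical: old python-libvirt returns None instead of raising
    # libvirtError when fetching vcpu info fails, so guard for both.
    def count_used_vcpus(dom):
        try:
            vcpus = dom.vcpus()
        except libvirt.libvirtError:
            return 0
        if vcpus is None:
            return 0
        # vcpus() returns (per-vcpu info, per-vcpu cpu maps).
        return len(vcpus[1])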