[Yahoo-eng-team] [Bug 1260617] [NEW] Provide the ability to attach volumes in the read-only mode

2013-12-12 Thread Zhenguo Niu
Public bug reported:

Cinder now supports attaching volumes in read-only mode; this should be
exposed through Horizon. Read-only mode can be enforced by hypervisor
configuration during the attachment. Libvirt, Xen, VMware and Hyper-V support
R/O volumes.
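
For reference, a minimal sketch of how the flag can already be set with
python-cinderclient; the Horizon work would drive the same API (credentials
and the volume id below are illustrative):

    from cinderclient.v2 import client

    cinder = client.Client('admin', 'secret', 'admin',
                           'http://127.0.0.1:5000/v2.0')
    volume_id = 'REPLACE-WITH-VOLUME-UUID'
    # Flag the volume read-only; the hypervisor then enforces R/O when the
    # volume is attached.
    cinder.volumes.update_readonly_flag(volume_id, True)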

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260617

Title:
  Provide the ability to attach volumes in the read-only mode

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Cinder now supports attaching volumes in read-only mode; this should be
  exposed through Horizon. Read-only mode can be enforced by hypervisor
  configuration during the attachment. Libvirt, Xen, VMware and Hyper-V
  support R/O volumes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260618] [NEW] nova-api, nova-cert, nova-network not shown after installation. Please guide.

2013-12-12 Thread yashjit
Public bug reported:

Please help me. Issue description: I installed Havana (with nova-network, as I
am very new to this) on 1 controller node and 1 compute node. I reused the
topology and script with which I had successfully installed Grizzly, so only
very minor changes to the script were needed. For Havana I made a few minor
changes in the conf files:
Nova.conf - [database] connection is used rather than sql_connection
Cinder - same as above.

Now when I launch the script it runs, and on the controller the services are
shown using nova-manage service list:
Binary            Host     Zone      Status   State  Updated_At
nova-conductor    control  internal  enabled  :-)    2013-12-13 06:36:41
nova-consoleauth  control  internal  enabled  :-)    2013-12-13 06:36:41
nova-scheduler    control  internal  enabled  :-)    2013-12-13 06:36:42
root@control:/etc# ps aux | grep nova
nova  1471  0.0  1.6  67772 ?  Ss  Dec12  0:35 /usr/bin/python /usr/bin/nova-consoleauth --config-file=/etc/nova/nova.conf
nova  1472  0.0  1.6  67808 ?  Ss  Dec12  0:35 /usr/bin/python /usr/bin/nova-conductor --config-file=/etc/nova/nova.conf
nova  1474  0.0  1.6  68416 ?  Ss  Dec12  0:40 /usr/bin/python /usr/bin/nova-scheduler --config-file=/etc/nova/nova.conf
nova  1476  0.0  0.7  32276 ?  Ss  Dec12  0:12 /usr/bin/python /usr/bin/nova-novncproxy --config-file=/etc/nova/nova.conf
root  4479  0.0  0.0  13588   912 pts/2  S+  12:08  0:00 grep --color=auto nova

In the same way, on the compute node the script runs well, but nova-compute,
nova-network, and nova-metadata-api do not start. When the services are
restarted on the controller node the result is as below:
root@control:/etc# cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
stop: Unknown instance: 
nova-api start/running, process 4497
stop: Unknown instance: 
nova-cert start/running, process 4508
nova-conductor stop/waiting
nova-conductor start/running, process 4519
nova-consoleauth stop/waiting
nova-consoleauth start/running, process 4530
nova-novncproxy stop/waiting
nova-novncproxy start/running, process 4541
nova-scheduler stop/waiting
nova-scheduler start/running, process 4556
So I find that the nova-api, nova-cert, nova-network, and nova-metadata-api
services are not running. Now the log and conf files; logs first.

1. nova-api -
2013-12-12 16:27:43.068 11954 INFO nova.wsgi [-] osapi_compute listening on 0.0.0.0:8774
2013-12-12 16:27:43.068 11954 INFO nova.openstack.common.service [-] Starting 1 workers
2013-12-12 16:27:43.070 11954 INFO nova.openstack.common.service [-] Started child 12085
2013-12-12 16:27:43.082 11954 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'
2013-12-12 16:27:43.088 11954 INFO nova.wsgi [-] metadata listening on 0.0.0.0:8775
2013-12-12 16:27:43.093 11954 INFO nova.openstack.common.service [-] Starting 1 workers
2013-12-12 16:27:43.095 11954 INFO nova.openstack.common.service [-] Started child 12086
2013-12-12 16:27:43.075 12085 INFO nova.osapi_compute.wsgi.server [-] (12085) wsgi starting up on http://0.0.0.0:8774/
2013-12-12 16:27:44.005 12086 INFO nova.metadata.wsgi.server [-] (12086) wsgi starting up on http://0.0.0.0:8775/
2013-12-12 16:29:32.864 12036 INFO nova.openstack.common.service [-] Caught SIGTERM, exiting
2013-12-12 16:29:32.864 12086 INFO nova.openstack.common.service [-] Caught SIGTERM, exiting
2013-12-12 16:29:32.864 12085 INFO nova.openstack.common.service [-] Caught SIGTERM, exiting
2013-12-12 16:29:32.864 12036 INFO nova.wsgi [-] Stopping WSGI server.
2013-12-12 16:29:32.864 12086 INFO nova.wsgi [-] Stopping WSGI server.
2013-12-12 16:29:32.865 12085 INFO nova.wsgi [-] Stopping WSGI server.
2013-12-12 16:29:32.867 11954 INFO nova.openstack.common.service [-] Caught SIGTERM, stopping children
2013-12-12 16:29:32.867 11954 INFO nova.openstack.common.service [-] Waiting on 3 children to exit
2013-12-12 16:29:32.868 11954 INFO nova.openstack.common.service [-] Child 12086 exited with status 1
2013-12-12 16:29:32.868 11954 INFO nova.openstack.common.service [-] Child 12036 exited with status 1
2013-12-12 16:29:32.869 11954 INFO nova.openstack.common.service [-] Child 12085 exited with status 1

2. Nova-cert-
2013-12-12 16:27:42.600 11994 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2013-12-12 16:27:42.634 11994 AUDIT nova.service [-] Starting cert node (version 2013.2)
2013-12-12 16:27:43.449 11994 INFO nova.openstack.common.rpc.common [req-cc5378ca-5534-48ac-855b-af1851b84cbd None None] Connected to AMQP server on localhost:5672
2013-12-12 16:29:32.967 11994 INFO nova.openstack.common.service [-] Caught SIGTERM, exiting

3. nova-scheduler
2013-12-12 16:27:45.000 12184 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _period

[Yahoo-eng-team] [Bug 1260489] Re: --debug flag not working in neutron

2013-12-12 Thread Abhishek Chanda
From the description, this looks like a neutronclient issue. I've added
it and marked the bug in nova as invalid. Please assign the bug to
yourself in neutronclient.

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260489

Title:
  --debug flag not working in neutron

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Neutron:
  New

Bug description:
  This is with the neutron master branch, in a single node devstack
  setup. The branch is at commit
  3b4233873539bad62d202025529678a5b0add412.

  If I use the --debug flag in a neutron CLI, for example, port-list, I
  don't see any debug output:

  cloud@controllernode:/opt/stack/neutron$ neutron --debug port-list
  +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                           |
  +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
  | 6c26cdc1-acc1-439c-bb47-d343085b7b78 |      | fa:16:3e:32:2c:eb | {"subnet_id": "37f15352-e816-4a03-b58c-b4d5c1fa8e2a", "ip_address": "10.0.0.2"}     |
  | f09b14b2-3162-4212-9d91-f97b22c95f31 |      | fa:16:3e:99:08:6b | {"subnet_id": "d4717b67-fd64-45ed-b22c-dedbd23afff3", "ip_address": "172.24.4.226"} |
  | f0ba4efd-12ca-4d56-8c7d-e879e4150a63 |      | fa:16:3e:02:41:47 | {"subnet_id": "37f15352-e816-4a03-b58c-b4d5c1fa8e2a", "ip_address": "10.0.0.1"}     |
  +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
  cloud@controllernode:/opt/stack/neutron$

  
  On the other hand, if I use the --debug flag for nova, for example, nova 
list, I see the curl request and response showing up:

  
  cloud@controllernode:/opt/stack/neutron$ nova --debug list

  REQ: curl -i 'http://192.168.52.85:5000/v2.0/tokens' -X POST -H
  "Content-Type: application/json" -H "Accept: application/json" -H
  "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin",
  "passwordCredentials": {"username": "admin", "password":
  "password"}}}'

  RESP: [200] CaseInsensitiveDict({'date': 'Thu, 05 Dec 2013 23:41:07 GMT', 
'vary': 'X-Auth-Token', 'content-length': '8255', 'content-type': 
'application/json'})
  RESP BODY: {"access": {"token": {"issued_at": "2013-12-05T23:41:07.307915", 
"expires": "2013-12-06T23:41:07Z", "id": 
"MIIOkwYJKoZIhvcNAQcCoIIOhDCCDoACAQExCTAHBgUrDgMCGjCCDOkGCSqGSIb3DQEHAaCCDNoEggzWeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMi0wNVQyMzo0MTowNy4zMDc5MTUiLCAiZXhwaXJlcyI6ICIyMDEzLTEyLTA2VDIzOjQxOjA3WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAibmFtZSI6ICJhZG1pbiJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6ODc3NC92Mi9hN2IzOTYwYjk3OTI0YmFiOWE1NWE5ZjlmNjg0YTg3MCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAiaWQiOiAiMDQyMzVjMmE1ODNlNDAwZDg1NTBkYTI0NmNiZDI1YWEiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOi
 
AiY29tcHV0ZSIsICJuYW1lIjogIm5vdmEifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5Njk2LyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojk2OTYvIiwgImlkIjogIjYyNWI1YzM3ZDJlYzQ4ZGRhMTRmZGZmZmMyZjBhMTY0IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5Njk2LyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzYvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo4Nzc2L3YyL2E3YjM5NjBiOTc5MjRiYWI5YTU1YTlmOWY2ODRhODcwIiwgImlkIjogIjNmODVjN2ZmZjNjMzRmNWNiMzlmMTZiMzQ2ZmY1Mjc0IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo4Nzc2L3YyL2E3YjM5NjBiOTc5MjRiYWI5YTU1YTlmOWY2ODRhODcwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6ODc3NC92MyIsICJyZWd
 
pb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjMiLCAiaWQiOiAiYTM4NjBlZTM3MWEyNDIxNGFlYTBiODk5M2I1YTY0OTciLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjMifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR
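
  As an aside for anyone debugging this: a hedged sketch of the logging setup
  an OpenStack CLI typically performs for --debug (this is not the actual
  neutronclient code; 'neutronclient.client' is the client's HTTP logger
  name). If no handler is ever installed, the flag parses but prints nothing:

      import logging

      def configure_logging(debug):
          # Without a configured handler, DEBUG records are simply dropped.
          level = logging.DEBUG if debug else logging.WARNING
          logging.basicConfig(level=level)
          # HTTP request/response logging comes from the client's own logger:
          logging.getLogger('neutronclient.client').setLevel(level)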

[Yahoo-eng-team] [Bug 1260598] [NEW] ml2 race in network creation

2013-12-12 Thread Isaku Yamahata
Public bug reported:

There are races in network creation in the ML2 type drivers.
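
For context, a hedged illustration of the generic allocate-segment race a
type driver can hit; the report names no specific code path, and the model
below is a stand-in for an ML2 allocation table:

    def allocate_segment_racy(session, VlanAllocation):
        alloc = (session.query(VlanAllocation).
                 filter_by(allocated=False).first())
        # a concurrent request can select the same row here
        alloc.allocated = True          # both commits "succeed"
        return alloc

    def allocate_segment_locked(session, VlanAllocation):
        # SELECT ... FOR UPDATE serializes concurrent allocators
        alloc = (session.query(VlanAllocation).
                 filter_by(allocated=False).
                 with_lockmode('update').first())
        alloc.allocated = True
        return alloc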

** Affects: neutron
 Importance: Undecided
 Assignee: Isaku Yamahata (yamahata)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260598

Title:
  ml2 race in network creation

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  There are races in network creation in the ML2 type drivers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260144] Re: auth_token is very long

2013-12-12 Thread chenhaiq
It defaults to the PKI token provider, whose tokens are very long.
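
For anyone hitting this, a hedged keystone.conf sketch of switching back to
short UUID tokens (option name per Havana-era keystone; verify against your
release):

    [token]
    # PKI tokens embed the whole service catalog and run to several KB;
    # the UUID provider returns a short random token instead.
    provider = keystone.token.providers.uuid.Provider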

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260144

Title:
  auth_token is very long

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  In my devstack env, the auth_token is very very long

  $ keystone token-get

  
  
  +---------+------------------------------------------------------------------
  [remainder of the long token table truncated in the digest]

[Yahoo-eng-team] [Bug 1260588] [NEW] Change retry to attempt for retry filter logic

2013-12-12 Thread Jay Lau
Public bug reported:

After the patch Ia355810b106fee14a55f48081301a310979befac, the retry filter
was renamed to IgnoreAttemptedHostsFilter and its 'retry' variable was changed
to 'attempt', so the nova scheduler and compute logic should be updated to
replace 'retry' with 'attempt' as well.
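
A hedged sketch of the structure involved: the scheduler hands retry state to
compute via filter_properties; the 'retry' keys below are real for nova of
this era, while the post-rename 'attempt' key is an assumption drawn from the
description:

    # Pre-rename: what the scheduler populates and compute re-sends on
    # reschedule.
    filter_properties = {'retry': {'num_attempts': 1, 'hosts': []}}

    # Post-rename proposal, mirroring IgnoreAttemptedHostsFilter.
    filter_properties = {'attempt': {'num_attempts': 1, 'hosts': []}}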

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jay Lau (jay-lau-513)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260588

Title:
  Change retry to attempt for retry filter logic

Status in OpenStack Compute (Nova):
  New

Bug description:
  After the patch Ia355810b106fee14a55f48081301a310979befac, the retry filter
  was renamed to IgnoreAttemptedHostsFilter and its 'retry' variable was
  changed to 'attempt', so the nova scheduler and compute logic should be
  updated to replace 'retry' with 'attempt' as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256182] Re: test_associate_ip_to_server_without_passing_floating_ip failed due to invalid assertion

2013-12-12 Thread Qiu Hua Qiao
** Changed in: tempest
   Status: Invalid => New

** Project changed: tempest => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1256182

Title:
  test_associate_ip_to_server_without_passing_floating_ip failed due to
  invalid assertion

Status in OpenStack Compute (Nova):
  New

Bug description:
  test_associate_ip_to_server_without_passing_floating_ip and
  test_associate_nonexistant_floating_ip in
  tempest/api/compute/floating_ips/test_floating_ips_actions.py failed.

  OpenStack returns 400 Bad Request when these cases execute, while the cases
  assert NotFound (404):
  @attr(type=['negative', 'gate'])
  def test_associate_ip_to_server_without_passing_floating_ip(self):
  # Negative test:Association of empty floating IP to specific server
  # should raise NotFound exception
  self.assertRaises(exceptions.NotFound,
self.client.associate_floating_ip_to_server,
'', self.server_id)

  --
  _StringException: pythonlogging:'': {{{
  2013-11-28 09:11:15,651 Request: POST 
http://192.168.4.5:8774/v2/a3654af9cfbd4cde86aa963710054727/servers/1df5edf9-7376-4b58-9b79-c8f90415df8b/action
  2013-11-28 09:11:15,653 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json', 'X-Auth-Token': ''}
  2013-11-28 09:11:15,655 Request Body: {"addFloatingIp": {"address": ""}}
  2013-11-28 09:11:15,778 Response Status: 400
  2013-11-28 09:11:15,779 Nova request id: 
req-d1300fe2-b94f-41af-85ca-6038a2c18621
  2013-11-28 09:11:15,781 Response Headers: {'content-length': '96', 'date': 
'Thu, 28 Nov 2013 15:11:15 GMT', 'content-type': 'application/json; 
charset=UTF-8', 'connection': 'close'}
  2013-11-28 09:11:15,782 Response Body: {"badRequest": {"message": "NV-676D697 
No nw_info cache associated with instance", "code": 400}}
  }}}

  Traceback (most recent call last):
File 
"/tmp/tempest/tempest/tempest/api/compute/floating_ips/test_floating_ips_actions.py",
 line 182, in test_associate_ip_to_server_without_passing_floating_ip
  '', self.server_id)
File "/usr/lib/python2.6/site-packages/testtools/testcase.py", line 394, in 
assertRaises
  self.assertThat(our_callable, matcher)
File "/usr/lib/python2.6/site-packages/testtools/testcase.py", line 406, in 
assertThat
  mismatch = matcher.match(matchee)
File "/usr/lib/python2.6/site-packages/testtools/matchers/_exception.py", 
line 99, in match
  mismatch = self.exception_matcher.match(exc_info)
File "/usr/lib/python2.6/site-packages/testtools/matchers/_higherorder.py", 
line 61, in match
  mismatch = matcher.match(matchee)
File "/usr/lib/python2.6/site-packages/testtools/testcase.py", line 386, in 
match
  reraise(*matchee)
File "/usr/lib/python2.6/site-packages/testtools/matchers/_exception.py", 
line 92, in match
  result = matchee()
File "/usr/lib/python2.6/site-packages/testtools/testcase.py", line 883, in 
__call__
  return self._callable_object(*self._args, **self._kwargs)
File 
"/tmp/tempest/tempest/tempest/services/compute/json/floating_ips_client.py", 
line 75, in associate_floating_ip_to_server
  resp, body = self.post(url, post_body, self.headers)
File "/tmp/tempest/tempest/tempest/common/rest_client.py", line 302, in post
  return self.request('POST', url, headers, body)
File "/tmp/tempest/tempest/tempest/common/rest_client.py", line 436, in 
request
  resp, resp_body)
File "/tmp/tempest/tempest/tempest/common/rest_client.py", line 486, in 
_error_checker
  raise exceptions.BadRequest(resp_body)
  BadRequest: Bad request
  Details: {u'badRequest': {u'message': u'NV-676D697 No nw_info cache 
associated with instance', u'code': 400}}


  @attr(type=['negative', 'gate'])
  def test_dissociate_nonexistant_floating_ip(self):
  # Negative test:Dissociation of a non existent floating IP should fail
  # Dissociating non existent floating IP
  self.assertRaises(exceptions.NotFound,
self.client.disassociate_floating_ip_from_server,
"0.0.0.0", self.server_id)

  --
  _StringException: pythonlogging:'': {{{
  2013-11-28 09:11:16,037 Request: POST 
http://192.168.4.5:8774/v2/a3654af9cfbd4cde86aa963710054727/servers/1df5edf9-7376-4b58-9b79-c8f90415df8b/action
  2013-11-28 09:11:16,038 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json', 'X-Auth-Token': ''}
  2013-11-28 09:11:16,040 Request Body: {"addFloatingIp": {"address": 
"0.0.0.0"}}
  2013-11-28 09:11:16,147 Response Status: 400
  2013-11-28 09:11:16,149 Nova request id: 
req-96de6b94-bbb0-494e-9f4a-bbfeea9188e2
  201
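
  A minimal sketch of the test-side adjustment the title implies, assuming
  the 400 response is accepted as correct (the reassignment to nova suggests
  the service behavior may be fixed instead):

      self.assertRaises(exceptions.BadRequest,
                        self.client.associate_floating_ip_to_server,
                        '', self.server_id)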

[Yahoo-eng-team] [Bug 1256182] [NEW] test_associate_ip_to_server_without_passing_floating_ip failed due to invalid assertion

2013-12-12 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

test_associate_ip_to_server_without_passing_floating_ip and
test_associate_nonexistant_floating_ip in
tempest/api/compute/floating_ips/test_floating_ips_actions.py failed.

OpenStack returns 400 Bad Request when these cases execute, while the cases
assert NotFound (404):
@attr(type=['negative', 'gate'])
def test_associate_ip_to_server_without_passing_floating_ip(self):
# Negative test:Association of empty floating IP to specific server
# should raise NotFound exception
self.assertRaises(exceptions.NotFound,
  self.client.associate_floating_ip_to_server,
  '', self.server_id)

--
_StringException: pythonlogging:'': {{{
2013-11-28 09:11:15,651 Request: POST 
http://192.168.4.5:8774/v2/a3654af9cfbd4cde86aa963710054727/servers/1df5edf9-7376-4b58-9b79-c8f90415df8b/action
2013-11-28 09:11:15,653 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json', 'X-Auth-Token': ''}
2013-11-28 09:11:15,655 Request Body: {"addFloatingIp": {"address": ""}}
2013-11-28 09:11:15,778 Response Status: 400
2013-11-28 09:11:15,779 Nova request id: 
req-d1300fe2-b94f-41af-85ca-6038a2c18621
2013-11-28 09:11:15,781 Response Headers: {'content-length': '96', 'date': 
'Thu, 28 Nov 2013 15:11:15 GMT', 'content-type': 'application/json; 
charset=UTF-8', 'connection': 'close'}
2013-11-28 09:11:15,782 Response Body: {"badRequest": {"message": "NV-676D697 
No nw_info cache associated with instance", "code": 400}}
}}}

Traceback (most recent call last):
  File 
"/tmp/tempest/tempest/tempest/api/compute/floating_ips/test_floating_ips_actions.py",
 line 182, in test_associate_ip_to_server_without_passing_floating_ip
'', self.server_id)
  File "/usr/lib/python2.6/site-packages/testtools/testcase.py", line 394, in 
assertRaises
self.assertThat(our_callable, matcher)
  File "/usr/lib/python2.6/site-packages/testtools/testcase.py", line 406, in 
assertThat
mismatch = matcher.match(matchee)
  File "/usr/lib/python2.6/site-packages/testtools/matchers/_exception.py", 
line 99, in match
mismatch = self.exception_matcher.match(exc_info)
  File "/usr/lib/python2.6/site-packages/testtools/matchers/_higherorder.py", 
line 61, in match
mismatch = matcher.match(matchee)
  File "/usr/lib/python2.6/site-packages/testtools/testcase.py", line 386, in 
match
reraise(*matchee)
  File "/usr/lib/python2.6/site-packages/testtools/matchers/_exception.py", 
line 92, in match
result = matchee()
  File "/usr/lib/python2.6/site-packages/testtools/testcase.py", line 883, in 
__call__
return self._callable_object(*self._args, **self._kwargs)
  File 
"/tmp/tempest/tempest/tempest/services/compute/json/floating_ips_client.py", 
line 75, in associate_floating_ip_to_server
resp, body = self.post(url, post_body, self.headers)
  File "/tmp/tempest/tempest/tempest/common/rest_client.py", line 302, in post
return self.request('POST', url, headers, body)
  File "/tmp/tempest/tempest/tempest/common/rest_client.py", line 436, in 
request
resp, resp_body)
  File "/tmp/tempest/tempest/tempest/common/rest_client.py", line 486, in 
_error_checker
raise exceptions.BadRequest(resp_body)
BadRequest: Bad request
Details: {u'badRequest': {u'message': u'NV-676D697 No nw_info cache associated 
with instance', u'code': 400}}


@attr(type=['negative', 'gate'])
def test_dissociate_nonexistant_floating_ip(self):
# Negative test:Dissociation of a non existent floating IP should fail
# Dissociating non existent floating IP
self.assertRaises(exceptions.NotFound,
  self.client.disassociate_floating_ip_from_server,
  "0.0.0.0", self.server_id)

--
_StringException: pythonlogging:'': {{{
2013-11-28 09:11:16,037 Request: POST 
http://192.168.4.5:8774/v2/a3654af9cfbd4cde86aa963710054727/servers/1df5edf9-7376-4b58-9b79-c8f90415df8b/action
2013-11-28 09:11:16,038 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json', 'X-Auth-Token': ''}
2013-11-28 09:11:16,040 Request Body: {"addFloatingIp": {"address": "0.0.0.0"}}
2013-11-28 09:11:16,147 Response Status: 400
2013-11-28 09:11:16,149 Nova request id: 
req-96de6b94-bbb0-494e-9f4a-bbfeea9188e2
2013-11-28 09:11:16,151 Response Headers: {'content-length': '96', 'date': 
'Thu, 28 Nov 2013 15:11:16 GMT', 'content-type': 'application/json; 
charset=UTF-8', 'connection': 'close'}
2013-11-28 09:11:16,152 Response Body: {"badRequest": {"message": "NV-676D697 
No nw_info cache associated with instance", "code": 400}}
}}}

Traceback (most recent call last):
  File 
"/tmp/tempest/tempest/tempest/api/compute/floating_ips/test_floating_ips_actions.py",
 line 134, in test_associate_nonexistant_floating_ip
"0.0.0.0

[Yahoo-eng-team] [Bug 1260581] [NEW] VM live migration process, the destination resource will not be charged

2013-12-12 Thread Haojie Jia
Public bug reported:

During virtual machine live migration, resource usage on the destination host
is not updated at all. If, while the migration is in progress, a new virtual
machine is created on the destination host, the migration may fail. Thus, I
suggest pre-claiming (charging) the destination resources for the duration of
the migration.
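
A hedged sketch of the pre-claim idea, loosely modeled on the claim pattern
nova's resource tracker already uses for spawn; applying it to live migration
is the reporter's proposal, not an existing API, and all names below are
placeholders:

    def live_migrate_with_claim(resource_tracker, context, instance,
                                dest_host, do_live_migration):
        # Reserve destination resources before the migration starts so a
        # concurrent boot cannot oversubscribe the destination host.
        with resource_tracker.instance_claim(context, instance):
            # An exception inside the block aborts the claim; on success
            # the usage stays recorded against the destination.
            do_live_migration(context, instance, dest_host)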

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260581

Title:
  VM live migration process, the destination resource will not be
  charged

Status in OpenStack Compute (Nova):
  New

Bug description:
  During virtual machine live migration, resource usage on the destination
  host is not updated at all. If, while the migration is in progress, a new
  virtual machine is created on the destination host, the migration may fail.
  Thus, I suggest pre-claiming (charging) the destination resources for the
  duration of the migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260574] [NEW] tempest.api.compute.v3.servers.test_server_rescue.ServerRescueV3TestXML.test_rescued_vm_detach_volume fails gate job sporadically

2013-12-12 Thread Peter Portante
Public bug reported:


See: http://logs.openstack.org/06/61906/1/check/check-tempest-dsvm-
full/81e2893/console.html

If the log becomes archived, you might have to add a ".gz" to the above
URL.

2013-12-13 02:40:08.447 | 
==
2013-12-13 02:40:08.447 | FAIL: 
tempest.api.compute.v3.servers.test_server_rescue.ServerRescueV3TestXML.test_rescued_vm_detach_volume[gate,negative]
2013-12-13 02:40:08.447 | 
tempest.api.compute.v3.servers.test_server_rescue.ServerRescueV3TestXML.test_rescued_vm_detach_volume[gate,negative]
2013-12-13 02:40:08.447 | 
--
2013-12-13 02:40:08.448 | _StringException: Empty attachments:
2013-12-13 02:40:08.448 |   stderr
2013-12-13 02:40:08.448 |   stdout
2013-12-13 02:40:08.449 | 
2013-12-13 02:40:08.449 | pythonlogging:'': {{{
2013-12-13 02:40:08.449 | 2013-12-13 02:27:53,777 Request: POST 
http://127.0.0.1:8774/v3/servers/996af184-3abe-4f9e-98ce-001b236f0be3/action
2013-12-13 02:40:08.449 | 2013-12-13 02:27:53,777 Request Headers: 
{'Content-Type': 'application/xml', 'Accept': 'application/xml', 
'X-Auth-Token': ''}
2013-12-13 02:40:08.449 | 2013-12-13 02:27:53,777 Request Body: 
2013-12-13 02:40:08.449 | http://docs.openstack.org/compute/api/v1.1"; 
volume_id="7d7dc13f-99f3-4d0b-a3f7-66d1e96674d2"/>
2013-12-13 02:40:08.450 | 2013-12-13 02:27:54,398 Response Status: 202
2013-12-13 02:40:08.450 | 2013-12-13 02:27:54,398 Nova request id: 
req-c74f2ee3-0606-44c3-bb5a-47d1b9ae9d4c
2013-12-13 02:40:08.450 | 2013-12-13 02:27:54,399 Response Headers: 
{'content-length': '0', 'date': 'Fri, 13 Dec 2013 02:27:54 GMT', 
'content-type': 'application/xml', 'connection': 'close'}
2013-12-13 02:40:08.450 | 2013-12-13 02:27:54,399 Request: GET 
http://127.0.0.1:8776/v1/e571f615fc6141808e95a5c3717a1208/volumes/7d7dc13f-99f3-4d0b-a3f7-66d1e96674d2
2013-12-13 02:40:08.450 | 2013-12-13 02:27:54,399 Request Headers: 
{'Content-Type': 'application/xml', 'Accept': 'application/xml', 
'X-Auth-Token': ''}
2013-12-13 02:40:08.451 | 2013-12-13 02:27:54,506 Response Status: 200
2013-12-13 02:40:08.451 | 2013-12-13 02:27:54,507 Nova request id: 
req-366446b3-ae44-4ce6-b13d-726033efdb93
2013-12-13 02:40:08.451 | 2013-12-13 02:27:54,507 Response Headers: 
{'content-length': '519', 'content-location': 
u'http://127.0.0.1:8776/v1/e571f615fc6141808e95a5c3717a1208/volumes/7d7dc13f-99f3-4d0b-a3f7-66d1e96674d2',
 'date': 'Fri, 13 Dec 2013 02:27:54 GMT', 'content-type': 'application/xml', 
'connection': 'close'}
2013-12-13 02:40:08.451 | 2013-12-13 02:27:54,507 Response Body: 
2013-12-13 02:40:08.452 | http://docs.openstack.org/volume/ext/volume_image_metadata/api/v1";
 xmlns:atom="http://www.w3.org/2005/Atom"; 
xmlns="http://docs.openstack.org/volume/api/v1"; status="attaching" 
display_name="test_detach" availability_zone="nova" bootable="false" 
created_at="2013-12-13 02:27:12" display_description="None" volume_type="None" 
snapshot_id="None" source_volid="None" 
id="7d7dc13f-99f3-4d0b-a3f7-66d1e96674d2" 
size="1">

.
.
.

2013-12-13 02:40:09.194 | 2013-12-13 02:31:31,751 Request: GET 
http://127.0.0.1:8776/v1/e571f615fc6141808e95a5c3717a1208/volumes/7d7dc13f-99f3-4d0b-a3f7-66d1e96674d2
2013-12-13 02:40:09.194 | 2013-12-13 02:31:31,752 Request Headers: 
{'Content-Type': 'application/xml', 'Accept': 'application/xml', 
'X-Auth-Token': ''}
2013-12-13 02:40:09.194 | 2013-12-13 02:31:31,845 Response Status: 200
2013-12-13 02:40:09.194 | 2013-12-13 02:31:31,845 Nova request id: 
req-988fbe56-5c2c-42b9-8b61-bac560f73653
2013-12-13 02:40:09.194 | 2013-12-13 02:31:31,845 Response Headers: 
{'content-length': '798', 'content-location': 
u'http://127.0.0.1:8776/v1/e571f615fc6141808e95a5c3717a1208/volumes/7d7dc13f-99f3-4d0b-a3f7-66d1e96674d2',
 'date': 'Fri, 13 Dec 2013 02:31:31 GMT', 'content-type': 'application/xml', 
'connection': 'close'}
2013-12-13 02:40:09.194 | 2013-12-13 02:31:31,845 Response Body: 
2013-12-13 02:40:09.194 | http://docs.openstack.org/volume/ext/volume_image_metadata/api/v1";
 xmlns:atom="http://www.w3.org/2005/Atom"; 
xmlns="http://docs.openstack.org/volume/api/v1"; status="detaching" 
display_name="test_detach" availability_zone="nova" bootable="false" 
created_at="2013-12-13 02:27:12" display_description="None" volume_type="None" 
snapshot_id="None" source_volid="None" 
id="7d7dc13f-99f3-4d0b-a3f7-66d1e96674d2" size="1">Falserw
2013-12-13 02:40:09.195 | }}}
2013-12-13 02:40:09.195 | 
2013-12-13 02:40:09.195 | Traceback (most recent call last):
2013-12-13 02:40:09.195 |   File 
"tempest/api/compute/v3/servers/test_server_rescue.py", line 78, in _detach
2013-12-13 02:40:09.196 | 'available')
2013-12-13 02:40:09.196 |   File 
"tempest/services/volume/xml/volumes_client.py", line 191, in 
wait_for_volume_status
2013-12-13 02:40:09.196 | raise exceptions.TimeoutException(message)
2013-12-13 02:40:09.196 | TimeoutException: Request timed out
2013-12-13 02:40:09.197 | 

[Yahoo-eng-team] [Bug 1260575] [NEW] security group quota usage is wrong when the security group is deleted by admin

2013-12-12 Thread Liyingjun
Public bug reported:

The quota usage for security groups does not decrease when the security
group is deleted by an admin.

Use the attached script to reproduce; here is what I got:

+ new_user
+ ATTEMPT_ID=27449
+ TENANT_NAME=tanant-27449
++ awk '/id/{print $4}'
++ keystone tenant-create --name tanant-27449
+ TENANT_ID=8e37a593f3234b6582f9e436a1b044c4
+ USER_NAME=user-27449
++ awk '/id/{print $4}'
++ keystone user-create --name user-27449 --tenant-id 
8e37a593f3234b6582f9e436a1b044c4 --pass secret
+ USER_ID=373d946636324d53ba6161e065f68572
+ as_user nova quota-show
+ OS_USERNAME=user-27449
+ OS_TENANT_NAME=tanant-27449
+ OS_PASSWORD=secret
+ nova quota-show
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
+ (( i=0 ))
+ (( i<10 ))
+ echo '** Creating security group sec-27449-0 by user-27449 **'
** Creating security group sec-27449-0 by user-27449 **
+ as_user nova secgroup-create sec-27449-0 sec-27449-0
+ OS_USERNAME=user-27449
+ OS_TENANT_NAME=tanant-27449
+ OS_PASSWORD=secret
+ nova secgroup-create sec-27449-0 sec-27449-0
+----+-------------+-------------+
| Id | Name        | Description |
+----+-------------+-------------+
| 39 | sec-27449-0 | sec-27449-0 |
+----+-------------+-------------+
+ (( i++ ))
+ (( i<10 ))
+ echo '** Creating security group sec-27449-1 by user-27449 **'
** Creating security group sec-27449-1 by user-27449 **
+ as_user nova secgroup-create sec-27449-1 sec-27449-1
+ OS_USERNAME=user-27449
+ OS_TENANT_NAME=tanant-27449
+ OS_PASSWORD=secret
+ nova secgroup-create sec-27449-1 sec-27449-1
+----+-------------+-------------+
| Id | Name        | Description |
+----+-------------+-------------+
| 40 | sec-27449-1 | sec-27449-1 |
+----+-------------+-------------+
+ (( i++ ))
+ (( i<10 ))
+ echo '** Creating security group sec-27449-2 by user-27449 **'
** Creating security group sec-27449-2 by user-27449 **
+ as_user nova secgroup-create sec-27449-2 sec-27449-2
+ OS_USERNAME=user-27449
+ OS_TENANT_NAME=tanant-27449
+ OS_PASSWORD=secret
+ nova secgroup-create sec-27449-2 sec-27449-2
+----+-------------+-------------+
| Id | Name        | Description |
+----+-------------+-------------+
| 41 | sec-27449-2 | sec-27449-2 |
+----+-------------+-------------+
+ (( i++ ))
+ (( i<10 ))
+ echo '** Creating security group sec-27449-3 by user-27449 **'
** Creating security group sec-27449-3 by user-27449 **
+ as_user nova secgroup-create sec-27449-3 sec-27449-3
+ OS_USERNAME=user-27449
+ OS_TENANT_NAME=tanant-27449
+ OS_PASSWORD=secret
+ nova secgroup-create sec-27449-3 sec-27449-3
+----+-------------+-------------+
| Id | Name        | Description |
+----+-------------+-------------+
| 42 | sec-27449-3 | sec-27449-3 |
+----+-------------+-------------+
+ (( i++ ))
+ (( i<10 ))
+ echo '** Creating security group sec-27449-4 by user-27449 **'
** Creating security group sec-27449-4 by user-27449 **
+ as_user nova secgroup-create sec-27449-4 sec-27449-4
+ OS_USERNAME=user-27449
+ OS_TENANT_NAME=tanant-27449
+ OS_PASSWORD=secret
+ nova secgroup-create sec-27449-4 sec-27449-4
+----+-------------+-------------+
| Id | Name        | Description |
+----+-------------+-------------+
| 43 | sec-27449-4 | sec-27449-4 |
+----+-------------+-------------+
+ (( i++ ))
+ (( i<10 ))
+ echo '** Creating security group sec-27449-5 by user-27449 **'
** Creating security group sec-27449-5 by user-27449 **
+ as_user nova secgroup-create sec-27449-5 sec-27449-5
+ OS_USERNAME=user-27449
+ OS_TENANT_NAME=tanant-27449
+ OS_PASSWORD=secret
+ nova secgroup-create sec-27449-5 sec-27449-5
+----+-------------+-------------+
| Id | Name        | Description |
+----+-------------+-------------+
| 44 | sec-27449-5 | sec-27449-5 |
+----+-------------+-------------+
+ (( i++ ))
+ (( i<10 ))
+ echo '** Creating security group sec-27449-6 by user-27449 **'
** Creating security group sec-27449-6 by user-27449 **
+ as_user nova secgroup-create sec-27449-6 sec-27449-6
+ OS_USERNAME=user-27449
+ OS_TENANT_NAME=tanant-27449
+ OS_PASSWORD=secret
+ nova secgroup-create sec-27449-6 sec-27449-6
+----+-------------+-------------+
| Id | Name        | Description |
+----+-------------+-------------+
| 45 | sec-27449-6 | sec-27449-6 |
+----+-------------+-------------+
+ (( i++ ))
+ (( i<10 ))
+ echo '** Creating security group sec-27449-7 by user-27449 **'
** Creating security group sec-27449-7 by user-27449 **
+ as_user nova secgroup-create sec-27449-7 sec-27449-7
+
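
A hedged sketch of where the fix likely lands (method names follow nova's
quota API of this era; treat the exact calls as an assumption): the decrement
must be reserved against the project that owns the group, not the admin
caller's project:

    def decrement_secgroup_quota(QUOTAS, context, security_group):
        # Reserving against context.project_id (the admin's tenant) is what
        # leaves the owning tenant's usage stuck at the old count.
        owner = security_group['project_id']
        reservations = QUOTAS.reserve(context, project_id=owner,
                                      security_groups=-1)
        QUOTAS.commit(context, reservations, project_id=owner)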

[Yahoo-eng-team] [Bug 1260495] Re: Setting autodoc_tree_index_modules makes documentation builds fail

2013-12-12 Thread David Stanek
Adding Keystone to make sure we update our code to use this when
committed and remove the temporary extension.

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260495

Title:
  Setting autodoc_tree_index_modules makes documentation builds fail

Status in OpenStack Identity (Keystone):
  New
Status in Python Build Reasonableness:
  New

Bug description:
  The arguments originally being passed into sphinx.apidoc specified '.'
  as the path to index. Unfortunately this includes the setup.py module.
  Sphinx dies while trying to process the setup.rst likely because the
  setup.py module calls setuptools.setup() when imported causing some
  sort of recursion. The final result is something like:

2013-12-08 21:08:12.088 | reading sources... [ 80%] api/setup
2013-12-08 21:08:12.100 | /usr/lib/python2.7/distutils/dist.py:267: 
UserWarning: Unknown distribution option: 'setup_requires'
2013-12-08 21:08:12.101 |   warnings.warn(msg)
2013-12-08 21:08:12.102 | /usr/lib/python2.7/distutils/dist.py:267: 
UserWarning: Unknown distribution option: 'pbr'
2013-12-08 21:08:12.102 |   warnings.warn(msg)
2013-12-08 21:08:12.103 | usage: setup.py [global_opts] cmd1 [cmd1_opts] 
[cmd2 [cmd2_opts] ...]
2013-12-08 21:08:12.103 |or: setup.py --help [cmd1 cmd2 ...]
2013-12-08 21:08:12.104 |or: setup.py --help-commands
2013-12-08 21:08:12.104 |or: setup.py cmd --help
2013-12-08 21:08:12.104 | 
2013-12-08 21:08:12.105 | error: invalid command 'build_sphinx'
2013-12-08 21:08:12.622 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-docs/.tox/venv/bin/python setup.py 
build_sphinx'
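
  For reference, a hedged setup.cfg sketch of the direction this points at;
  the exclude option landed in pbr around this time, so verify the exact name
  against your pbr version:

      [pbr]
      autodoc_tree_index_modules = True
      # keep sphinx.apidoc away from setup.py so importing it cannot recurse
      autodoc_tree_excludes =
          setup.py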

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1260495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260139] Re: VMWARE: Unable to spawn instances from sparse/ide images

2013-12-12 Thread Tracy Jones
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260139

Title:
  VMWARE: Unable to spawn instances from sparse/ide images

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Branch: stable/havana

  Traceback: http://paste.openstack.org/show/54855/

  Steps to reproduce:
  Upload an ide/sparse type image to glance.
  Spawn an instance from that image.

  Actual Result:
  Failed to spawn an instance

  Link to image:
  http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.0-i386-disk.vmdk
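
  For reproduction, a hedged sketch of uploading such an image with the image
  properties the nova VMware driver reads (glanceclient v1 API; endpoint and
  token below are illustrative):

      from glanceclient import Client

      glance = Client('1', 'http://127.0.0.1:9292', token='ADMIN_TOKEN')
      with open('cirros-0.3.0-i386-disk.vmdk', 'rb') as image_data:
          glance.images.create(name='cirros-sparse-ide',
                               disk_format='vmdk',
                               container_format='bare',
                               properties={'vmware_disktype': 'sparse',
                                           'vmware_adaptertype': 'ide'},
                               data=image_data)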

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260550] [NEW] fix core_plugin path in nicira README

2013-12-12 Thread Aaron Rosen
Public bug reported:

fix core_plugin path in nicira README

** Affects: neutron
 Importance: Low
 Assignee: Aaron Rosen (arosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260550

Title:
  fix core_plugin path in nicira README

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  fix core_plugin path in nicira README

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257815] Re: Internal server error while deleting subnet(can not find the rows in ipavailabilityranges table)

2013-12-12 Thread Sean Dague
not a tempest bug

** Changed in: tempest
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257815

Title:
  Internal server error while deleting subnet(can not find the rows  in
  ipavailabilityranges  table)

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Tempest:
  Invalid

Bug description:
  When trying to delete a subnet, sometimes the following error comes out.
  This is on Icehouse with DB2 as the database, but I guess it might happen
  with other databases too.

  
  
  2013-12-04 03:49:48.275 26604 TRACE neutron.plugins.ml2.plugin
  2013-12-04 03:49:48.277 26604 ERROR neutron.api.v2.resource 
[req-e8e78c50-25b0-4e19-b5f0-796041d7b464 f53f4f5b40154ad6b1ec1ac08f88ecf2 
b93ff0
  8b44da407185a26033768101f5] NT-C3C9C57 delete failed
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in 
resource
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 432, in delete
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/plugins/ml2/plugin.py", line 443
  , in delete_network
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource 
self.delete_subnet(context, subnet.id)
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/plugins/ml2/plugin.py", line 530, in 
delete_subnet
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource break
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 449,in 
__exit__
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource self.commit()
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 361, in 
commit
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource 
self._prepare_impl()
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 340,in 
_prepare_impl
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource 
self.session.flush()
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/db/sqlalchemy/session.py",
 line 545, in _wrap
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource raise 
exception.DBError(e)
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource DBError: (Error) 
ibm_db_dbi::Error:
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource Error 1: 
[IBM][CLI Driver][DB2/LINUXX8664] SQL0100W  No row was found for FETCH,UPDATE 
or DELETE; or the result of a query is an empty table.  SQLSTATE=02000 
SQLCODE=100
  2013-12-04 03:49:48.277 26604 TRACE neutron.api.v2.resource  'DELETE FROM 
ipavailabilityranges WHERE ipavailabilityranges.allocation_pool_id= ? AND 
ipavailabilityranges.first_ip = ? AND ipavailabilityranges.last_ip = ?' 
(('e376f33d-a224-4468-91eb-82191e158726', '10.100.0.2', '10.100.0.2'), 
('e376f33d-a224-4468-91eb-82191e158726', '10.100.0.4', '10.100.0.14'))

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257070] Re: test_glance_timeout flakey fail

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1257070

Title:
  test_glance_timeout flakey fail

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Transient fail for
  tempest.cli.simple_read_only.test_glance.SimpleReadOnlyGlanceClientTest
  test_glance_timeout

  http://logs.openstack.org/66/55766/3/gate/gate-tempest-devstack-vm-
  postgres-full/a807434/testr_results.html.gz

  ft254.6: 
tempest.cli.simple_read_only.test_glance.SimpleReadOnlyGlanceClientTest.test_glance_timeout_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2013-11-29 07:57:20,345 running: '/usr/local/bin/glance --os-username admin 
--os-tenant-name admin --os-password secret --os-auth-url 
http://127.0.0.1:5000/v2.0/  --timeout 15 image-list '
  2013-11-29 07:57:37,633 output of /usr/local/bin/glance --os-username admin 
--os-tenant-name admin --os-password secret --os-auth-url 
http://127.0.0.1:5000/v2.0/  --timeout 15 image-list :

  2013-11-29 07:57:37,635 error output of /usr/local/bin/glance --os-username 
admin --os-tenant-name admin --os-password secret --os-auth-url 
http://127.0.0.1:5000/v2.0/  --timeout 15 image-list :
  Error communicating with http://127.0.0.1:9292 timed out
  }}}

  Traceback (most recent call last):
File "tempest/cli/simple_read_only/test_glance.py", line 89, in 
test_glance_timeout
  self.glance('image-list', flags='--timeout %d' % CONF.cli.timeout)
File "tempest/cli/__init__.py", line 81, in glance
  'glance', action, flags, params, admin, fail_ok)
File "tempest/cli/__init__.py", line 110, in cmd_with_auth
  return self.cmd(cmd, action, flags, params, fail_ok)
File "tempest/cli/__init__.py", line 132, in cmd
  stderr=result_err)
  CommandFailed: Command '['/usr/local/bin/glance', '--os-username', 'admin', 
'--os-tenant-name', 'admin', '--os-password', 'secret', '--os-auth-url', 
'http://127.0.0.1:5000/v2.0/', '--timeout', '15', 'image-list']' returned 
non-zero exit status 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1257070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1217432] Re: timeout on AuthorizationTestJSON

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1217432

Title:
  timeout on AuthorizationTestJSON

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/59/43459/3/gate/gate-tempest-devstack-vm-
  full/e57504d/console.html

  2013-08-27 14:39:29.384 | 
==
  2013-08-27 14:39:29.384 | FAIL: setUpClass 
(tempest.api.compute.test_authorization.AuthorizationTestJSON)
  2013-08-27 14:39:29.385 | setUpClass 
(tempest.api.compute.test_authorization.AuthorizationTestJSON)
  2013-08-27 14:39:29.385 | 
--
  2013-08-27 14:39:29.385 | _StringException: Traceback (most recent call last):
  2013-08-27 14:39:29.386 |   File "tempest/api/compute/test_authorization.py", 
line 66, in setUpClass
  2013-08-27 14:39:29.386 | 
cls.images_client.wait_for_image_status(image_id, 'ACTIVE')
  2013-08-27 14:39:29.386 |   File 
"tempest/services/compute/json/images_client.py", line 110, in 
wait_for_image_status
  2013-08-27 14:39:29.386 | raise exceptions.TimeoutException
  2013-08-27 14:39:29.386 | TimeoutException: Request timed out
  2013-08-27 14:39:29.386 | 
  2013-08-27 14:39:29.387 | 
  2013-08-27 14:39:29.387 | 
==
  2013-08-27 14:39:29.388 | FAIL: process-returncode
  2013-08-27 14:39:29.388 | process-returncode
  2013-08-27 14:39:29.416 | 
--
  2013-08-27 14:39:29.416 | _StringException: Binary content:
  2013-08-27 14:39:29.416 |   traceback (test/plain; charset="utf8")
  2013-08-27 14:39:29.416 | 
  2013-08-27 14:39:29.417 | 
  2013-08-27 14:39:29.417 | 
--
  2013-08-27 14:39:29.418 | Ran 1152 tests in 968.915s
  2013-08-27 14:39:29.418 | 
  2013-08-27 14:39:29.419 | FAILED (failures=2, skipped=67)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1217432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1225024] Re: tempest.api.compute.admin.test_hosts.HostsAdminTestXML.test_list_hosts_with_zone[gate] unexpected conductor service

2013-12-12 Thread Sean Dague
Can't find it in logstash; assuming it's fixed.

** Changed in: nova
   Status: Incomplete => Fix Released

** Changed in: tempest
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1225024

Title:
  
tempest.api.compute.admin.test_hosts.HostsAdminTestXML.test_list_hosts_with_zone[gate]
  unexpected conductor service

Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  2013-09-13 03:44:02.095 | 
==
  2013-09-13 03:44:02.095 | FAIL: 
tempest.api.compute.admin.test_hosts.HostsAdminTestXML.test_list_hosts_with_zone[gate]
  2013-09-13 03:44:02.096 | 
tempest.api.compute.admin.test_hosts.HostsAdminTestXML.test_list_hosts_with_zone[gate]
  2013-09-13 03:44:02.096 | 
--
  2013-09-13 03:44:02.096 | _StringException: Empty attachments:
  2013-09-13 03:44:02.097 |   stderr
  2013-09-13 03:44:02.097 |   stdout
  2013-09-13 03:44:02.097 | 
  2013-09-13 03:44:02.098 | pythonlogging:'': {{{
  2013-09-13 03:44:02.098 | 2013-09-13 03:27:35,206 Request: GET 
http://127.0.0.1:8774/v2/b5dc34c995d94389b0b2a5f18851aca6/os-hosts
  2013-09-13 03:44:02.098 | 2013-09-13 03:27:35,487 Response Status: 200
  2013-09-13 03:44:02.099 | 2013-09-13 03:27:35,488 Nova request id: 
req-10082857-8c33-4472-b511-5a5945cc2da4
  2013-09-13 03:44:02.099 | 2013-09-13 03:27:35,488 Request: GET 
http://127.0.0.1:8774/v2/b5dc34c995d94389b0b2a5f18851aca6/os-hosts?zone=internal
  2013-09-13 03:44:02.099 | 2013-09-13 03:27:35,513 Response Status: 200
  2013-09-13 03:44:02.099 | 2013-09-13 03:27:35,513 Nova request id: 
req-29968204-ebbb-468c-ab8c-d25654550a97
  2013-09-13 03:44:02.100 | }}}
  2013-09-13 03:44:02.100 | 
  2013-09-13 03:44:02.100 | Traceback (most recent call last):
  2013-09-13 03:44:02.101 |   File "tempest/api/compute/admin/test_hosts.py", 
line 51, in test_list_hosts_with_zone
  2013-09-13 03:44:02.101 | self.assertIn(host, hosts)
  2013-09-13 03:44:02.101 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 328, in 
assertIn
  2013-09-13 03:44:02.101 | self.assertThat(haystack, Contains(needle))
  2013-09-13 03:44:02.102 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 417, in 
assertThat
  2013-09-13 03:44:02.102 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-09-13 03:44:02.103 | MismatchError: {u'service': u'conductor', 
u'host_name': u'devstack-precise-hpcloud-az3-265828', u'zone': u'internal'} not 
in [{u'service': u'network', u'host_name': 
u'devstack-precise-hpcloud-az3-265828', u'zone': u'internal'}, {u'service': 
u'cert', u'host_name': u'devstack-precise-hpcloud-az3-265828', u'zone': 
u'internal'}, {u'service': u'scheduler', u'host_name': 
u'devstack-precise-hpcloud-az3-265828', u'zone': u'internal'}]
  2013-09-13 03:44:02.103 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1225024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249889] Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute, image, volume] failed

2013-12-12 Thread Sean Dague
not a tempest bug, this looks to be a nova bug with some sort of race on
nw attach

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249889

Title:
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute,image,volume]
  failed

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  Traceback (most recent call last):
File "tempest/scenario/test_volume_boot_pattern.py", line 144, in 
test_volume_boot_pattern
  keypair)
File "tempest/scenario/test_volume_boot_pattern.py", line 93, in 
_ssh_to_server
  server.add_floating_ip(floating_ip)
File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 
108, in add_floating_ip
  self.manager.add_floating_ip(self, address, fixed_address)
File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 
465, in add_floating_ip
  self._action('addFloatingIp', server, {'address': address})
File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 
993, in _action
  return self.api.client.post(url, body=body)
File "/opt/stack/new/python-novaclient/novaclient/client.py", line 234, in 
post
  return self._cs_request(url, 'POST', **kwargs)
File "/opt/stack/new/python-novaclient/novaclient/client.py", line 213, in 
_cs_request
  **kwargs)
File "/opt/stack/new/python-novaclient/novaclient/client.py", line 195, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
File "/opt/stack/new/python-novaclient/novaclient/client.py", line 189, in 
request
  raise exceptions.from_response(resp, body, url, method)
  BadRequest: No nw_info cache associated with instance (HTTP 400) (Request-ID: 
req-4e6ed4cd-d2e8-42a2-aae6-f0a3820f71f5)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249889/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260440] Re: nova-compute host is added to scheduling pool before Neutron can bind network ports on said host

2013-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/61608
Committed: 
https://git.openstack.org/cgit/openstack/tripleo-incubator/commit/?id=661884b5c7a47d01171c680c83b601d3c9a15d9f
Submitter: Jenkins
Branch:master

commit 661884b5c7a47d01171c680c83b601d3c9a15d9f
Author: Clint Byrum 
Date:   Wed Dec 11 15:33:01 2013 -0800

Wait for Neutron L2 Agent on Compute Node

The L2 Agent sometimes does not register until later on in the
deployment for some reason. This is just a work-around until that bug
can be properly understood.

Change-Id: Idbbc977aa2e13f2026de05ae7e6571bc9dd0a498
Closes-Bug: #1260440
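
For reference, a hedged Python sketch of what that wait amounts to (the
merged change is shell in tripleo-incubator; list_agents and its filters are
from python-neutronclient):

    import time

    def wait_for_l2_agent(neutron, host, timeout=300):
        # neutron: a neutronclient.v2_0.client.Client instance. Poll until
        # the host's Open vSwitch agent has registered and reports alive,
        # so ports scheduled there can actually be bound.
        deadline = time.time() + timeout
        while time.time() < deadline:
            agents = neutron.list_agents(
                host=host, agent_type='Open vSwitch agent')['agents']
            if any(agent.get('alive') for agent in agents):
                return
            time.sleep(1)
        raise RuntimeError('L2 agent on %s never registered' % host)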


** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260440

Title:
  nova-compute host is added to scheduling pool before Neutron can bind
  network ports on said host

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  This is a race condition.

  Given a cloud with 0 compute nodes available, on a compute node:
  * Start up neutron-openvswitch-agent
  * Start up nova-compute
  * nova boot an instance

  Scenario 1:
  * neutron-openvswitch-agent registers with Neutron before nova tries to boot 
instance
  * port is bound to agent
  * instance boots with correct networking

  Scenario 2:
  * nova schedules instance to host before neutron-openvswitch-agent is 
registered with Neutron
  * nova instance fails with vif_type=binding_failed
  * instance is in ERROR state

  I would expect that Nova would not try to schedule instances on
  compute hosts that are not ready.

  Please also see this mailing list thread for more info:

  http://lists.openstack.org/pipermail/openstack-
  dev/2013-December/022084.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1217734] Re: FAIL: setUpClass (tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML Unauthorized)

2013-12-12 Thread Sean Dague
if you can't find a gate race in logstash, I'm calling it fixed

** Changed in: nova
   Status: Incomplete => Fix Released

** Changed in: python-keystoneclient
   Status: Incomplete => Fix Released

** Changed in: tempest
   Status: Incomplete => Fix Released

** Changed in: python-cinderclient
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1217734

Title:
  FAIL: setUpClass
  (tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML
  Unauthorized)

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Cinder:
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  http://logs.openstack.org/44/43444/4/check/gate-grenade-devstack-
  vm/4f78566/console.html

  2013-08-28 06:32:58.510 | 
==
  2013-08-28 06:32:58.511 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-08-28 06:32:58.511 | setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-08-28 06:32:58.511 | 
--
  2013-08-28 06:32:58.512 | _StringException: Traceback (most recent call last):
  2013-08-28 06:32:58.512 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 52, in setUpClass
  2013-08-28 06:32:58.512 | 'test_attach')
  2013-08-28 06:32:58.512 |   File 
"tempest/services/compute/xml/volumes_extensions_client.py", line 114, in 
create_volume
  2013-08-28 06:32:58.513 | self.headers)
  2013-08-28 06:32:58.513 |   File "tempest/common/rest_client.py", line 260, 
in post
  2013-08-28 06:32:58.513 | return self.request('POST', url, headers, body)
  2013-08-28 06:32:58.514 |   File "tempest/common/rest_client.py", line 388, 
in request
  2013-08-28 06:32:58.514 | resp, resp_body)
  2013-08-28 06:32:58.514 |   File "tempest/common/rest_client.py", line 430, 
in _error_checker
  2013-08-28 06:32:58.515 | raise exceptions.Unauthorized()
  2013-08-28 06:32:58.515 | Unauthorized: Unauthorized

  http://logs.openstack.org/23/43723/4/gate/gate-tempest-devstack-vm-
  full/3fefc90/console.html

  The real happening time is close to:
  2013-08-28 06:24:24.882 | setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1217734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260538] [NEW] nova-manage usage exposes action-args

2013-12-12 Thread Scott Devoid
Public bug reported:

The nova-manage command exposes the action_args option in the usage
output for each command.

E.g.
$ nova-manage network modify -h
usage: nova-manage network modify [-h] [--fixed_range ]
  [--project ] [--host ]
  [--disassociate-project]
  [--disassociate-host]
  [action_args [action_args ...]]

positional arguments:
  action_args



This can cause confusion as users naturally expect there to be more
"actions" on commands like "modify". Even in straightforward cases, this
positional argument leaks into usage.

$ nova-manage db version -h
usage: nova-manage db version [-h] [action_args [action_args ...]]

positional arguments:
  action_args

Please consider suppressing documentation on action_args. In addition,
expose the __doc__ strings for these functions, as is done in the nova
command.
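
A minimal sketch of the suggested suppression, assuming nova-manage
keeps its argparse-based sub-command parser: with help=argparse.SUPPRESS
the positional still parses, but it disappears from both the usage line
and the positional-arguments listing.

    import argparse

    parser = argparse.ArgumentParser(prog='nova-manage db version')
    # Still accepted on the command line, but hidden from -h output.
    parser.add_argument('action_args', nargs='*', help=argparse.SUPPRESS)
    print(parser.format_help())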

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit nova-manage user-experience ux

** Project changed: barbican => nova

** Tags added: low-hanging-fruit user-experience

** Tags added: nova-manage ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260538

Title:
  nova-manage usage exposes action-args

Status in OpenStack Compute (Nova):
  New

Bug description:
  The nova-manage command exposes the action_args option in the usage
  output for each command.

  E.g.
  $ nova-manage network modify -h
  usage: nova-manage network modify [-h] [--fixed_range ]
[--project ] [--host ]
[--disassociate-project]
[--disassociate-host]
[action_args [action_args ...]]

  positional arguments:
action_args

  

  This can cause confusion as users naturally expect there to be more
  "actions" on commands like "modify". Even in straightforward cases,
  this positional argument leaks into usage.

  $ nova-manage db version -h
  usage: nova-manage db version [-h] [action_args [action_args ...]]

  positional arguments:
action_args

  Please consider suppressing documentation on action_args. In addition,
  expose the __doc__ strings for these functions, as is done in the nova
  command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1217432] Re: timeout on AuthorizationTestJSON

2013-12-12 Thread Sean Dague
This is a glance call that's failing to allocate the image

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1217432

Title:
  timeout on AuthorizationTestJSON

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  http://logs.openstack.org/59/43459/3/gate/gate-tempest-devstack-vm-
  full/e57504d/console.html

  2013-08-27 14:39:29.384 | 
==
  2013-08-27 14:39:29.384 | FAIL: setUpClass 
(tempest.api.compute.test_authorization.AuthorizationTestJSON)
  2013-08-27 14:39:29.385 | setUpClass 
(tempest.api.compute.test_authorization.AuthorizationTestJSON)
  2013-08-27 14:39:29.385 | 
--
  2013-08-27 14:39:29.385 | _StringException: Traceback (most recent call last):
  2013-08-27 14:39:29.386 |   File "tempest/api/compute/test_authorization.py", 
line 66, in setUpClass
  2013-08-27 14:39:29.386 | 
cls.images_client.wait_for_image_status(image_id, 'ACTIVE')
  2013-08-27 14:39:29.386 |   File 
"tempest/services/compute/json/images_client.py", line 110, in 
wait_for_image_status
  2013-08-27 14:39:29.386 | raise exceptions.TimeoutException
  2013-08-27 14:39:29.386 | TimeoutException: Request timed out
  2013-08-27 14:39:29.386 | 
  2013-08-27 14:39:29.387 | 
  2013-08-27 14:39:29.387 | 
==
  2013-08-27 14:39:29.388 | FAIL: process-returncode
  2013-08-27 14:39:29.388 | process-returncode
  2013-08-27 14:39:29.416 | 
--
  2013-08-27 14:39:29.416 | _StringException: Binary content:
  2013-08-27 14:39:29.416 |   traceback (test/plain; charset="utf8")
  2013-08-27 14:39:29.416 | 
  2013-08-27 14:39:29.417 | 
  2013-08-27 14:39:29.417 | 
--
  2013-08-27 14:39:29.418 | Ran 1152 tests in 968.915s
  2013-08-27 14:39:29.418 | 
  2013-08-27 14:39:29.419 | FAILED (failures=2, skipped=67)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1217432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161988] Re: Flavor naming shouldn't include "m1"

2013-12-12 Thread Dean Troyer
** Changed in: devstack
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161988

Title:
  Flavor naming shouldn't include "m1"

Status in devstack - openstack dev environments:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Compute (Nova):
  Won't Fix
Status in Python client library for heat:
  Confirmed
Status in Python client library for Nova:
  Won't Fix
Status in Tempest:
  Won't Fix

Bug description:
  Flavor naming shouldn't include "m1"

  ENV: devstack trunk / nova 814e109845b3b2546f60e3f537dcfe32893906a3
  (grizzly)

  The default flavors are now:
  m1.nano 
  m1.micro
  m1.tiny 
  m1.small 
  m1.medium 
  m1.large 
  m1.xlarge

  We are propagating AWS "m1" designation. This is not useful
  information to the OpenStack administrator or user, and it's actually
  possible misinformation as the "m1" on AWS suggests a specific
  generation of hardware.

  POSSIBLE SOLUTION:

  Drop the "m1":
  nano 
  micro
  tiny 
  small 
  medium 
  large 
  xlarge

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1161988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260530] [NEW] Instances appear pingable without an ingress icmp security rule

2013-12-12 Thread Ed Bak
Public bug reported:

Instances appear to be pingable for a short time after a floating IP is
associated, even though there is no ingress ICMP security group rule.
tcpdump of the instance's tap device shows that the instance isn't
actually responding to the ping; it appears that the router gateway
interface is answering the ping for a short time. You can reproduce
this by booting an instance with a security group that has only egress
rules. Allocate a floating IP address and ping it (nothing will happen
yet). Then associate the IP with the instance: the pings will start
getting replies.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260530

Title:
  Instances appear pingable without an ingress icmp security rule

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Instances appear to be pingable for a short time after a floating IP
  is associated, even though there is no ingress ICMP security group
  rule. tcpdump of the instance's tap device shows that the instance
  isn't actually responding to the ping; it appears that the router
  gateway interface is answering the ping for a short time. You can
  reproduce this by booting an instance with a security group that has
  only egress rules. Allocate a floating IP address and ping it
  (nothing will happen yet). Then associate the IP with the instance:
  the pings will start getting replies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260528] [NEW] Metering dashboard. Marker could not be found (havana)

2013-12-12 Thread Roman Sokolkov
Public bug reported:

Hello,

I couldn't reopen bug https://bugs.launchpad.net/horizon/+bug/1247752,
so I decided to create a new one.

I am using the latest havana release code, but I am still running into
the "Marker could not be found" error in the horizon logs.

[Thu Dec 12 22:49:15 2013] [error] Request returned failure status: 400
[Thu Dec 12 22:49:18 2013] [error] REQ: curl -i -X GET 
http://192.168.0.2:35357/v2.0/tenants?marker=tenant_marker&limit=21 -H 
"User-Agent: python-keystoneclient" -H "Forwarded: 
for=10.20.0.1;by=python-keystonece"
[Thu Dec 12 22:49:18 2013] [error] RESP: [400] {'date': 'Thu, 12 Dec 2013 
22:49:18 GMT', 'content-type': 'application/json', 'content-length': '88', 
'vary': 'X-Auth-Token'}
[Thu Dec 12 22:49:18 2013] [error] RESP BODY: {"error": {"message": "Marker 
could not be found", "code": 400, "title": "Bad Request"}}

"tenant_marker" value comes from
https://github.com/openstack/horizon/blob/stable/havana/openstack_dashboard/dashboards/admin/metering/views.py#L149
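
A defensive sketch of one way a caller could cope (this is not the
Horizon fix): fall back to an unpaginated listing when Keystone rejects
a stale marker. Here keystone is assumed to be a keystoneclient v2.0
Client, and the limit is illustrative.

    from keystoneclient import exceptions as ks_exc

    def list_tenants_safely(keystone, marker=None, limit=21):
        try:
            return keystone.tenants.list(marker=marker, limit=limit)
        except ks_exc.BadRequest:
            # Stale marker (e.g. a deleted tenant id): restart from
            # the first page instead of failing the dashboard view.
            return keystone.tenants.list(limit=limit)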

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260528

Title:
  Metering dashboard. Marker could not be found (havana)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hello,

  I couldn't reopen bug https://bugs.launchpad.net/horizon/+bug/1247752,
  so I decided to create a new one.

  I am using the latest havana release code, but I am still running into
  the "Marker could not be found" error in the horizon logs.

  [Thu Dec 12 22:49:15 2013] [error] Request returned failure status: 400
  [Thu Dec 12 22:49:18 2013] [error] REQ: curl -i -X GET 
http://192.168.0.2:35357/v2.0/tenants?marker=tenant_marker&limit=21 -H 
"User-Agent: python-keystoneclient" -H "Forwarded: 
for=10.20.0.1;by=python-keystonece"
  [Thu Dec 12 22:49:18 2013] [error] RESP: [400] {'date': 'Thu, 12 Dec 2013 
22:49:18 GMT', 'content-type': 'application/json', 'content-length': '88', 
'vary': 'X-Auth-Token'}
  [Thu Dec 12 22:49:18 2013] [error] RESP BODY: {"error": {"message": "Marker 
could not be found", "code": 400, "title": "Bad Request"}}

  "tenant_marker" value comes from
  
https://github.com/openstack/horizon/blob/stable/havana/openstack_dashboard/dashboards/admin/metering/views.py#L149

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260516] [NEW] PortInUse exception leaves orphaned ports

2013-12-12 Thread Robert Pothier
Public bug reported:

While bringing up 1000 VMs with Heat, some VMs end up in the ERROR
state. After deleting the VMs, not all ports are removed, so the next
time VMs are created they fail because no IP addresses are left.


Note: initially there are 7 ports and no VMs. After 804 VMs are
created, there are 811 ports.

root@control01:/usr/share/pyshared/heat# nova list | grep ACT | wc -l
358
root@control01:/usr/share/pyshared/heat# neutron port-list | wc -l
811
root@control01:/usr/share/pyshared/heat# nova list | grep ERR | wc -l
270
root@control01:/usr/share/pyshared/heat# nova list | grep stack | wc -l
804

After deleting the VMs, 248 ports remain


root@control01:/usr/share/pyshared/heat# nova list | grep stack | wc -l
0
root@control01:/usr/share/pyshared/heat# neutron port-list | wc -l
248



2013-12-12 20:55:35.320 20945 ERROR nova.scheduler.filter_scheduler
[req-844a11f6-66f1-4fc7-9f3b-8bacbe57a04d e95097acd0b041558f33d07f720c1bd7
354f17bf81924b278806c3e3798aa527] [instance:
2a3eadcc-5230-4ed6-bf9f-9a82d43b91c3] Error from last host: compute134
(node compute134.lab.cisco): [u'Traceback (most recent call last):\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1037, in
_build_instance\nset_access_ip=set_access_ip)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1410, in
_spawn\nLOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line
1407, in _spawn\nblock_device_info)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2063,
in spawn\nadmin_pass=admin_password)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2412,
in _create_image\ncontent=files, extra_md=extra_md,
network_info=network_info)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py", line 157, in
__init__\ncfg = netutils.get_injected_network_template(network_info)\n',
u'  File "/usr/lib/python2.7/dist-packages/nova/virt/netutils.py", line 74,
in get_injected_network_template\nif not (network_info and template):\n',
u'  File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 379,
in __len__\nreturn self._sync_wrapper(fn, *args, **kwargs)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/network/model.py", line 366, in
_sync_wrapper\nself.wait()\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/network/model.py", line 398, in
wait\nself[:] = self._gt.wait()\n', u'  File
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in
wait\nreturn self._exit_event.wait()\n', u'  File
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 120, in
wait\ncurrent.throw(*self._exc)\n', u'  File
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in
main\nresult = function(*args, **kwargs)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1228, in
_allocate_network_async\ndhcp_options=dhcp_options)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/network/api.py", line 49, in
wrapper\nres = f(self, context, *args, **kwargs)\n', u'  File
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 243,
in allocate_for_instance\nraise exception.PortInUse(port_id=port_id)\n',
u'PortInUse: Port 69e55016-b794-4dd9-b3f3-4e78336fbd11 is still in use.\n']
2013-12-12 20:55:35.321 20945 WARNING nova.scheduler.utils
[req-844a11f6-66f1-4fc7-9f3b-8bacbe57a04d e95097acd0b041558f33d07f720c1bd7
354f17bf81924b278806c3e3798aa527] Failed to scheduler_run_instance: No valid
host was found. Exceeded max scheduling attempts 3 for instance
2a3eadcc-5230-4ed6-bf9f-9a82d43b91c3
2013-12-12 20:55:35.324 20945 WARNING nova.scheduler.utils
[req-844a11f6-66f1-4fc7-9f3b-8bacbe57a04d e95097acd0b041558f33d07f720c1bd7
354f17bf81924b278806c3e3798aa527] [instance:
2a3eadcc-5230-4ed6-bf9f-9a82d43b91c3] Setting instance to ERROR state.
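
A hypothetical audit helper for spotting the leak described above,
comparing Neutron's compute-owned ports against live Nova instances;
all names are illustrative, and deleting the result should be done
with care.

    def find_orphaned_ports(neutron, nova):
        # Ports bound to instances that Nova no longer knows about.
        live_ids = set(server.id for server in
                       nova.servers.list(search_opts={'all_tenants': 1}))
        return [port for port in neutron.list_ports()['ports']
                if port['device_owner'].startswith('compute:')
                and port['device_id'] not in live_ids]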

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "server.log"
   https://bugs.launchpad.net/bugs/1260516/+attachment/3928318/+files/server.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260516

Title:
  PortInUse exception leaves orphaned ports

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  While bringing up 1000 VMs with Heat, some VMs end up in the ERROR
  state. After deleting the VMs, not all ports are removed, so the next
  time VMs are created they fail because no IP addresses are left.

  
  Note: initially there are 7 ports and no VMs. After 804 VMs are
  created, there are 811 ports.

  root@control01:/usr/share/pyshared/heat# nova list | grep 

[Yahoo-eng-team] [Bug 1250836] Re: Updating of instance metadata occasionally leads to a deadlock

2013-12-12 Thread Sean Dague
not a tempest bug

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250836

Title:
  Updating of instance metadata occasionally leads to a deadlock

Status in OpenStack Compute (Nova):
  New

Bug description:
  During the tempest tests run I got the following error:

  2013-11-13 10:09:18.814 ERROR nova.api.openstack 
[req-a3172f97-0d7c-4f8b-a7a5-bec6aad2b549 
ServerMetadataTestJSON-tempest-1285971638-user 
ServerMetadataTestJSON-tempest-1285971638-tenant] Caught error: 
(OperationalError) (1213, 'Deadlock found when trying to get lock; try 
restarting transaction') 'INSERT INTO instance_metadata (created_at, 
updated_at, deleted_at, deleted, `key`, value, instance_uuid) VALUES (%s, %s, 
%s, %s, %s, %s, %s)' (datetime.datetime(2013, 11, 13, 10, 9, 18, 811419), None, 
None, 0, 'key3', 'value3', 'ba645a19-78c5-439d-9408-68f413c200f4')
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack Traceback (most recent 
call last):
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 119, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack return 
req.get_response(self.application)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 571, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack return 
self.app(env, start_response)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 939, in __call__
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack content_type, 
body, accept)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 998, in _process_stack
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 1079, in dispatch
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/compute/server_metadata.py", line 67, 
in create
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack delete=False)
  2013-11-13 10:09:18.814 22804 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/compute/server_metadata.py", line 120, 
in _update_instance_metadata
  2013-11-13 10:09:18.814 22804 TRA
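
  The usual mitigation for this class of failure is to retry the
  transaction. A minimal sketch under that assumption (not necessarily
  the Nova fix), keyed off MySQL's "Deadlock found" message, with
  illustrative retry counts:

      import functools
      import time

      from sqlalchemy import exc as sqla_exc

      def retry_on_deadlock(retries=5, delay=0.5):
          def decorator(func):
              @functools.wraps(func)
              def wrapper(*args, **kwargs):
                  for attempt in range(retries):
                      try:
                          return func(*args, **kwargs)
                      except sqla_exc.OperationalError as err:
                          # Re-raise anything that is not a MySQL 1213
                          # deadlock, or when retries are exhausted.
                          if ('Deadlock found' not in str(err)
                                  or attempt == retries - 1):
                              raise
                          time.sleep(delay)
              return wrapper
          return decorator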

[Yahoo-eng-team] [Bug 1258319] Re: test_reboot_server_hard fails sporadically in swift check jobs

2013-12-12 Thread Sean Dague
*** This bug is a duplicate of bug 1224518 ***
https://bugs.launchpad.net/bugs/1224518

** This bug has been marked a duplicate of bug 1224518
   test_reboot_server_hard fails sporadically in swift check jobs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258319

Title:
  test_reboot_server_hard fails sporadically in swift check jobs

Status in OpenStack Compute (Nova):
  New

Bug description:
  test_reboot_server_hard fails sporadically in swift check jobs

  I believe this has been reported before, but I was not able to find
  it.

  See: http://logs.openstack.org/43/60343/1/gate/gate-tempest-dsvm-
  full/c92d206/console.html

  2013-12-05 21:29:18.174 | 
==
  2013-12-05 21:29:18.183 | FAIL: 
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_reboot_server_hard[gate,smoke]
  2013-12-05 21:29:18.186 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_reboot_server_hard[gate,smoke]
  2013-12-05 21:29:18.200 | 
--
  2013-12-05 21:29:18.206 | _StringException: Empty attachments:
  2013-12-05 21:29:18.206 |   stderr
  2013-12-05 21:29:18.207 |   stdout
  2013-12-05 21:29:18.207 | 
  2013-12-05 21:29:18.207 | pythonlogging:'': {{{

  .
  .
  .

  2013-12-05 21:29:19.174 | Traceback (most recent call last):
  2013-12-05 21:29:19.175 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 83, in 
test_reboot_server_hard
  2013-12-05 21:29:19.175 | 
self.client.wait_for_server_status(self.server_id, 'ACTIVE')
  2013-12-05 21:29:19.175 |   File 
"tempest/services/compute/xml/servers_client.py", line 369, in 
wait_for_server_status
  2013-12-05 21:29:19.175 | extra_timeout=extra_timeout)
  2013-12-05 21:29:19.176 |   File "tempest/common/waiters.py", line 82, in 
wait_for_server_status
  2013-12-05 21:29:19.176 | raise exceptions.TimeoutException(message)
  2013-12-05 21:29:19.176 | TimeoutException: Request timed out
  2013-12-05 21:29:19.177 | Details: Server 
f313af9a-8ec1-4f77-b63f-76d9317d6423 failed to reach ACTIVE status within the 
required time (196 s). Current status: HARD_REBOOT.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1224518] Re: test_reboot_server_hard fails sporadically in swift check jobs

2013-12-12 Thread Sean Dague
I don't think this is a tempest bug; it is a state transition bug in
Nova.

** Changed in: tempest
   Importance: Undecided => Low

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1224518

Title:
  test_reboot_server_hard fails sporadically in swift check jobs

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  See: http://logs.openstack.org/46/46146/2/check/gate-tempest-devstack-
  vm-postgres-full/b2712f1/console.html

  2013-09-12 04:43:17.625 | 
==
  2013-09-12 04:43:17.649 | FAIL: 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard[gate,smoke]
  2013-09-12 04:43:17.651 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard[gate,smoke]
  2013-09-12 04:43:17.652 | 
--
  2013-09-12 04:43:17.652 | _StringException: Empty attachments:
  2013-09-12 04:43:17.652 |   stderr
  2013-09-12 04:43:17.652 |   stdout
  2013-09-12 04:43:17.653 | 
  2013-09-12 04:43:17.653 | pythonlogging:'': {{{
  2013-09-12 04:43:17.653 | 2013-09-12 04:16:55,739 Request: GET 
http://127.0.0.1:8774/v2/83ed6f49279b4292a00b32397d2f52fb/servers/8ad0ad9a-3975-486f-94b4-af1c89b51aaf
  2013-09-12 04:43:17.654 | 2013-09-12 04:16:55,806 Response Status: 200
  2013-09-12 04:43:17.654 | 2013-09-12 04:16:55,806 Nova request id: 
req-cdc6b1fc-bcf2-4e9c-bea1-8bf935993cbd
  2013-09-12 04:43:17.654 | 2013-09-12 04:16:55,807 Request: POST 
http://127.0.0.1:8774/v2/83ed6f49279b4292a00b32397d2f52fb/servers/8ad0ad9a-3975-486f-94b4-af1c89b51aaf/action
  2013-09-12 04:43:17.655 | 2013-09-12 04:16:55,917 Response Status: 202
  2013-09-12 04:43:17.655 | 2013-09-12 04:16:55,917 Nova request id: 
req-3af37dd3-0ddc-4daa-aa6f-6958a5073cc4
  2013-09-12 04:43:17.655 | 2013-09-12 04:16:55,918 Request: GET 
http://127.0.0.1:8774/v2/83ed6f49279b4292a00b32397d2f52fb/servers/8ad0ad9a-3975-486f-94b4-af1c89b51aaf
  2013-09-12 04:43:17.655 | 2013-09-12 04:16:55,986 Response Status: 200
  2013-09-12 04:43:17.656 | 2013-09-12 04:16:55,986 Nova request id: 
req-a7298d3e-167c-4c8f-9506-6064ba811e5b

  .
  .
  .

  2013-09-12 04:43:17.976 | 2013-09-12 04:23:35,773 Request: GET 
http://127.0.0.1:8774/v2/83ed6f49279b4292a00b32397d2f52fb/servers/8ad0ad9a-3975-486f-94b4-af1c89b51aaf
  2013-09-12 04:43:17.976 | 2013-09-12 04:23:35,822 Response Status: 200
  2013-09-12 04:43:17.976 | 2013-09-12 04:23:35,823 Nova request id: 
req-a122aded-b49b-4847-9920-b2b8b09bc0ca
  2013-09-12 04:43:17.976 | }}}
  2013-09-12 04:43:17.977 | 
  2013-09-12 04:43:17.977 | Traceback (most recent call last):
  2013-09-12 04:43:17.978 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 81, in 
test_reboot_server_hard
  2013-09-12 04:43:17.978 | 
self.client.wait_for_server_status(self.server_id, 'ACTIVE')
  2013-09-12 04:43:17.979 |   File 
"tempest/services/compute/json/servers_client.py", line 176, in 
wait_for_server_status
  2013-09-12 04:43:17.979 | raise exceptions.TimeoutException(message)
  2013-09-12 04:43:17.979 | TimeoutException: Request timed out
  2013-09-12 04:43:17.980 | Details: Server 
8ad0ad9a-3975-486f-94b4-af1c89b51aaf failed to reach ACTIVE status within the 
required time (400 s). Current status: HARD_REBOOT.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1224518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244762] Re: tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance fails sporadically

2013-12-12 Thread Sean Dague
** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244762

Title:
  
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance
  fails sporadically

Status in OpenStack Compute (Nova):
  New

Bug description:
  See: http://logs.openstack.org/87/44787/16/check/check-tempest-
  devstack-vm-neutron/d2ede4d/console.html

  2013-10-25 18:06:37.957 | 
==
  2013-10-25 18:06:37.959 | FAIL: 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance[gate,smoke]
  2013-10-25 18:06:37.959 | 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance[gate,smoke]
  2013-10-25 18:06:37.959 | 
--
  2013-10-25 18:06:37.959 | _StringException: Empty attachments:
  2013-10-25 18:06:37.959 |   stderr
  2013-10-25 18:06:37.960 |   stdout
  2013-10-25 18:06:37.960 | 
  2013-10-25 18:06:37.960 | pythonlogging:'': {{{
  2013-10-25 18:06:37.960 | 2013-10-25 17:59:08,821 state: pending
  2013-10-25 18:06:37.960 | 2013-10-25 17:59:14,092 State transition "pending" 
==> "error" 5 second
  2013-10-25 18:06:37.961 | }}}
  2013-10-25 18:06:37.961 | 
  2013-10-25 18:06:37.961 | Traceback (most recent call last):
  2013-10-25 18:06:37.961 |   File 
"tempest/thirdparty/boto/test_ec2_instance_run.py", line 150, in 
test_run_stop_terminate_instance
  2013-10-25 18:06:37.961 | self.assertInstanceStateWait(instance, 
"running")
  2013-10-25 18:06:37.961 |   File "tempest/thirdparty/boto/test.py", line 356, 
in assertInstanceStateWait
  2013-10-25 18:06:37.962 | state = self.waitInstanceState(lfunction, 
wait_for)
  2013-10-25 18:06:37.962 |   File "tempest/thirdparty/boto/test.py", line 341, 
in waitInstanceState
  2013-10-25 18:06:37.962 | self.valid_instance_state)
  2013-10-25 18:06:37.962 |   File "tempest/thirdparty/boto/test.py", line 332, 
in state_wait_gone
  2013-10-25 18:06:37.962 | self.assertIn(state, valid_set | self.gone_set)
  2013-10-25 18:06:37.963 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 328, in 
assertIn
  2013-10-25 18:06:37.963 | self.assertThat(haystack, Contains(needle))
  2013-10-25 18:06:37.963 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 417, in 
assertThat
  2013-10-25 18:06:37.963 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-25 18:06:37.963 | MismatchError: u'error' not in set(['paused', 
'terminated', 'running', 'stopped', 'pending', '_GONE', 'stopping', 
'shutting-down'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252947] Re: tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON fails sporadically

2013-12-12 Thread Sean Dague
** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1252947

Title:
  tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON
  fails sporadically

Status in OpenStack Compute (Nova):
  New

Bug description:
  tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON
  fails sporadically.

  See:  http://logs.openstack.org/66/54966/2/check/check-tempest-
  devstack-vm-full/d611ed0/console.html

  2013-11-19 22:24:52.379 | 
==
  2013-11-19 22:24:52.380 | FAIL: setUpClass 
(tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON)
  2013-11-19 22:24:52.380 | setUpClass 
(tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON)
  2013-11-19 22:24:52.380 | 
--
  2013-11-19 22:24:52.380 | _StringException: Traceback (most recent call last):
  2013-11-19 22:24:52.380 |   File 
"tempest/api/compute/servers/test_servers_negative.py", line 46, in setUpClass
  2013-11-19 22:24:52.380 | resp, server = 
cls.create_test_server(wait_until='ACTIVE')
  2013-11-19 22:24:52.381 |   File "tempest/api/compute/base.py", line 118, in 
create_test_server
  2013-11-19 22:24:52.381 | server['id'], kwargs['wait_until'])
  2013-11-19 22:24:52.381 |   File 
"tempest/services/compute/json/servers_client.py", line 160, in 
wait_for_server_status
  2013-11-19 22:24:52.381 | extra_timeout=extra_timeout)
  2013-11-19 22:24:52.381 |   File "tempest/common/waiters.py", line 73, in 
wait_for_server_status
  2013-11-19 22:24:52.381 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-11-19 22:24:52.381 | BuildErrorException: Server 
62bfeebd-8878-477f-9eac-a8b21ec5ac26 failed to build and is in ERROR status

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260489] [NEW] --debug flag not working in neutron

2013-12-12 Thread Venkata Siva Vijayendra Bhamidipati
Public bug reported:

This is with the neutron master branch, in a single node devstack setup.
The branch is at commit 3b4233873539bad62d202025529678a5b0add412.

If I use the --debug flag with a neutron CLI command, for example
port-list, I don't see any debug output:

cloud@controllernode:/opt/stack/neutron$ neutron --debug port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 6c26cdc1-acc1-439c-bb47-d343085b7b78 |      | fa:16:3e:32:2c:eb | {"subnet_id": "37f15352-e816-4a03-b58c-b4d5c1fa8e2a", "ip_address": "10.0.0.2"}      |
| f09b14b2-3162-4212-9d91-f97b22c95f31 |      | fa:16:3e:99:08:6b | {"subnet_id": "d4717b67-fd64-45ed-b22c-dedbd23afff3", "ip_address": "172.24.4.226"}  |
| f0ba4efd-12ca-4d56-8c7d-e879e4150a63 |      | fa:16:3e:02:41:47 | {"subnet_id": "37f15352-e816-4a03-b58c-b4d5c1fa8e2a", "ip_address": "10.0.0.1"}      |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
cloud@controllernode:/opt/stack/neutron$ 


On the other hand, if I use the --debug flag with nova, for example
nova list, I see the curl request and response showing up:


cloud@controllernode:/opt/stack/neutron$ nova --debug list

REQ: curl -i 'http://192.168.52.85:5000/v2.0/tokens' -X POST -H
"Content-Type: application/json" -H "Accept: application/json" -H "User-
Agent: python-novaclient" -d '{"auth": {"tenantName": "admin",
"passwordCredentials": {"username": "admin", "password": "password"}}}'

RESP: [200] CaseInsensitiveDict({'date': 'Thu, 05 Dec 2013 23:41:07 GMT', 
'vary': 'X-Auth-Token', 'content-length': '8255', 'content-type': 
'application/json'})
RESP BODY: {"access": {"token": {"issued_at": "2013-12-05T23:41:07.307915", 
"expires": "2013-12-06T23:41:07Z", "id": 
"MIIOkwYJKoZIhvcNAQcCoIIOhDCCDoACAQExCTAHBgUrDgMCGjCCDOkGCSqGSIb3DQEHAaCCDNoEggzWeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMi0wNVQyMzo0MTowNy4zMDc5MTUiLCAiZXhwaXJlcyI6ICIyMDEzLTEyLTA2VDIzOjQxOjA3WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAibmFtZSI6ICJhZG1pbiJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6ODc3NC92Mi9hN2IzOTYwYjk3OTI0YmFiOWE1NWE5ZjlmNjg0YTg3MCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAiaWQiOiAiMDQyMzVjMmE1ODNlNDAwZDg1NTBkYTI0NmNiZDI1YWEiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAi
 
Y29tcHV0ZSIsICJuYW1lIjogIm5vdmEifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5Njk2LyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojk2OTYvIiwgImlkIjogIjYyNWI1YzM3ZDJlYzQ4ZGRhMTRmZGZmZmMyZjBhMTY0IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5Njk2LyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzYvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo4Nzc2L3YyL2E3YjM5NjBiOTc5MjRiYWI5YTU1YTlmOWY2ODRhODcwIiwgImlkIjogIjNmODVjN2ZmZjNjMzRmNWNiMzlmMTZiMzQ2ZmY1Mjc0IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo4Nzc2L3YyL2E3YjM5NjBiOTc5MjRiYWI5YTU1YTlmOWY2ODRhODcwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6ODc3NC92MyIsICJyZWdpb
 
24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjMiLCAiaWQiOiAiYTM4NjBlZTM3MWEyNDIxNGFlYTBiODk5M2I1YTY0OTciLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjMifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZXYzIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1OjMzMzMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTozMzMzIiwgImlkIjogIjZmZTY2OTMwNjA5MTQwYWVhMTIwMTJjNWViMzViZGQ2IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTozMzMzIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUiOiAiczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6OTI5MiIsICJpZCI6ICIyMjVhMDc2ZmZiOWI0YmQxYTdmODE4N2M0NzY2M2I0NyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6OTI5MiJ9XS
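
As a workaround sketch while the flag is broken, the client's HTTP
debug messages can be surfaced by configuring Python logging directly;
the logger name here is an assumption about python-neutronclient
internals, not a documented interface.

    import logging

    # Send all DEBUG-level records, including the client's request and
    # response logging, to stderr.
    logging.basicConfig(level=logging.DEBUG)
    logging.getLogger('neutronclient').setLevel(logging.DEBUG)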

[Yahoo-eng-team] [Bug 1254772] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML setUpClass times-out on attaching volume

2013-12-12 Thread David Kranz
This shows up in n-cpu:

The "model server went away" showed up 11 times in the last two weeks
with the last one being on Dec. 3. This sample size is too small for me
to close at this time.

2013-11-25 15:24:22.099 21076 ERROR nova.servicegroup.drivers.db [-] model
server went away
2013-11-25 15:24:32.814 ERROR nova.compute.manager
[req-ecacaa21-3f07-4b44-9896-8b5bd2238a19
ServersTestManualDisk-tempest-1962756300-user
ServersTestManualDisk-tempest-1962756300-tenant] [instance:
1f872097-8ad8-44f8-ba03-89a14115efe0] Failed to deallocate network for
instance.
2013-11-25 15:25:32.855 21076 ERROR root [-] Original exception being
dropped: ['Traceback (most recent call last):\n', '  File
"/opt/stack/new/nova/nova/compute/manager.py", line 1809, in
_try_deallocate_network\nself._deallocate_network(context, instance,
requested_networks)\n', '  File
"/opt/stack/new/nova/nova/compute/manager.py", line 1491, in
_deallocate_network\ncontext, instance,
requested_networks=requested_networks)\n', '  File
"/opt/stack/new/nova/nova/network/api.py", line 93, in wrapped\nreturn
func(self, context, *args, **kwargs)\n', '  File
"/opt/stack/new/nova/nova/network/api.py", line 318, in
deallocate_for_instance\nself.network_rpcapi.deallocate_for_instance(context,
**args)\n', '  File "/opt/stack/new/nova/nova/network/rpcapi.py", line 199,
in deallocate_for_instance\nhost=host,
requested_networks=requested_networks)\n', '  File
"/opt/stack/new/nova/nova/rpcclient.py", line 85, in call\nreturn
self._invoke(self.proxy.call, ctxt, method, **kwargs)\n', '  File
"/opt/stack/new/nova/nova/rpcclient.py", line 63, in _invoke\nreturn
cast_or_call(ctxt, msg, **self.kwargs)\n', '  File
"/opt/stack/new/nova/nova/openstack/common/rpc/proxy.py", line 130, in
call\nexc.info, real_topic, msg.get(\'method\'))\n', 'Timeout: Timeout while
waiting on RPC response - topic: "network", RPC method:
"deallocate_for_instance" info: ""\n']
2013-11-25 15:25:38.371 21076 ERROR nova.openstack.common.periodic_task [-]
Error during ComputeManager.update_available_resource: Timeout while waiting
on RPC response - topic: "conductor", RPC method: "compute_node_update"
info: ""
2013-11-25 15:26:32.903 21076 ERROR root [-] Original exception being
dropped: ['Traceback (most recent call last):\n', '  File
"/opt/stack/new/nova/nova/compute/manager.py", line 1919, in
_delete_instance\nself._shutdown_instance(context, db_inst, bdms)\n', '  File
"/opt/stack/new/nova/nova/compute/manager.py", line 1854, in
_shutdown_instance\nself._try_deallocate_network(context, instance,
requested_networks)\n', '  File
"/opt/stack/new/nova/nova/compute/manager.py", line 1814, in
_try_deallocate_network\nself._set_instance_error_state(context,
instance[\'uuid\'])\n', '  File
"/opt/stack/new/nova/nova/compute/manager.py", line 484, in
_set_instance_error_state\nvm_state=vm_states.ERROR)\n', '  File
"/opt/stack/new/nova/nova/compute/manager.py", line 473, in
_instance_update\n**kwargs)\n', '  File
"/opt/stack/new/nova/nova/conductor/api.py", line 389, in
instance_update\nupdates, \'conductor\')\n', '  File
"/opt/stack/new/nova/nova/conductor/rpcapi.py", line 149, in
instance_update\nservice=service)\n', '  File
"/opt/stack/new/nova/nova/rpcclient.py", line 85, in call\nreturn
self._invoke(self.proxy.call, ctxt, method, **kwargs)\n', '  File
"/opt/stack/new/nova/nova/rpcclient.py", line 63, in _invoke\nreturn
cast_or_call(ctxt, msg, **self.kwargs)\n', '  File
"/opt/stack/new/nova/nova/openstack/common/rpc/proxy.py", line 130, in
call\nexc.info, real_topic, msg.get(\'method\'))\n', 'Timeout: Timeout while
waiting on RPC response - topic: "conductor", RPC method: "instance_update"
info: ""\n']
2013-11-25 15:26:32.933 21076 ERROR nova.servicegroup.drivers.db [-]
Recovered model server connection!


** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254772

Title:
  tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML
  setUpClass times-out on attaching volume

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  2013-11-25 15:42:45.769 | 
==
  2013-11-25 15:42:45.770 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-11-25 15:42:45.770 | setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-11-25 15:42:45.770 | 
--
  2013-11-25 15:42:45.770 | _StringException: Traceback (most recent call last):
  2013-11-25 15:42:45.770 |   File 
"tempest/api/compute/servers/test_server

[Yahoo-eng-team] [Bug 1257032] Re: nova makes calls to neutron with out considering URI size limit

2013-12-12 Thread Venkata Siva Vijayendra Bhamidipati
*** This bug is a duplicate of bug 1228384 ***
https://bugs.launchpad.net/bugs/1228384

Will close this bug as a dup of
https://bugs.launchpad.net/nova/+bug/1228384 . As part of that fix, Phil
has implemented chunking of server ids when querying neutron for ports
of VMs. The above stack trace is for a quantum deployment, and Phil's
patch will be ported over to that deployment.
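
A sketch of that chunked-query pattern; the 50-id batch size and the
device_id filter are illustrative rather than the exact patch.

    def chunks(seq, size=50):
        for i in range(0, len(seq), size):
            yield seq[i:i + size]

    def ports_for_servers(neutron, server_ids):
        # neutronclient encodes a list-valued filter as repeated query
        # parameters, so batching keeps each GET under the URI cap.
        ports = []
        for batch in chunks(list(server_ids)):
            ports.extend(neutron.list_ports(device_id=batch)['ports'])
        return ports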

** This bug has been marked a duplicate of bug 1228384
   Security Group extension reads all Neutron ports for anything other that a 
single server

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257032

Title:
  nova makes calls to neutron with out considering URI size limit

Status in OpenStack Compute (Nova):
  New

Bug description:
  When requesting security group instance bindings for servers, Nova
  makes a call to Neutron. When there are many servers, the URI grows
  beyond 8k and Neutron returns a 414 error.

  We could easily hit this bug in our deployment, as we have several
  VMs running.

  There is a similar bug with net-list: it has to fetch the subnet list
  internally, and that could result in a 414 if there are enough
  subnets to make the URI too long. But that subnet-list was an
  internal call, so the fix was appropriate in the neutron client.

  Here the bug occurs while fetching the security group instance
  bindings, which is the primary call. Hence I feel the bug must be
  fixed in the Neutron API consumer, which is Nova.

  Also, there should be a general framework that keeps all API calls
  under the URI size limit, or a fix for every call whose URI can grow
  beyond the 8k limit.

  Stacktrace for reference
  2013-11-27 13:06:01.696 ERROR nova.api.openstack 
[req-020f17cb-ee43-4cd2-a270-767936e6546b 6abe780581924062bdb648375abcb378 
b9bbb06d41a942248e8d7070e17ed89d] Caught error: 414-{'message': ''}
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack Traceback (most recent 
call last):
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py", line 81, in 
__call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack return 
req.get_response(self.application)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py", line 
1053, in get_response
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py", line 
1022, in call_application
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 
159, in __call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", 
line 450, in __call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack return 
self.app(env, start_response)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 
159, in __call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 
159, in __call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 
159, in __call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py",
 line 131, in __call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 
159, in __call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack   File 
"/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 
147, in __call__
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2013-11-27 13:06:01.696 30107 TRACE nova.api.openstack

[Yahoo-eng-team] [Bug 1255627] Re: images.test_list_image_filters.ListImageFiltersTest fails with timeout

2013-12-12 Thread David Kranz
This non-white-listed error showed up in n-cpu:

2013-11-27 00:53:57.756 ERROR nova.virt.libvirt.driver [req-
298cf8f1-3907-4494-8b6e-61e9b88dfded ListImageFiltersTestXML-
tempest-656023876-user ListImageFiltersTestXML-tempest-656023876-tenant]
An error occurred while enabling hairpin mode on domain with xml:


According to logstash this happened 9 times in the last two weeks.

** Changed in: nova
   Status: New => Confirmed

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255627

Title:
  images.test_list_image_filters.ListImageFiltersTest fails with timeout

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  Spurious failure in this test:

  http://logs.openstack.org/49/55749/8/check/check-tempest-devstack-vm-
  full/9bc94d5/console.html

  2013-11-27 01:10:35.802 | 
==
  2013-11-27 01:10:35.802 | FAIL: setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
  2013-11-27 01:10:35.803 | setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
  2013-11-27 01:10:35.803 | 
--
  2013-11-27 01:10:35.803 | _StringException: Traceback (most recent call last):
  2013-11-27 01:10:35.804 |   File 
"tempest/api/compute/images/test_list_image_filters.py", line 50, in setUpClass
  2013-11-27 01:10:35.807 | cls.client.wait_for_image_status(cls.image1_id, 
'ACTIVE')
  2013-11-27 01:10:35.809 |   File 
"tempest/services/compute/xml/images_client.py", line 153, in 
wait_for_image_status
  2013-11-27 01:10:35.809 | raise exceptions.TimeoutException
  2013-11-27 01:10:35.809 | TimeoutException: Request timed out

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1248757] Re: test_snapshot_pattern fails with paramiko ssh EOFError

2013-12-12 Thread Matt Riedemann
Removing tempest and adding nova, glance and neutron given what this
test impacts:

FAIL:
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern[compute,image,network]

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: tempest

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1248757

Title:
  test_snapshot_pattern fails with paramiko ssh EOFError

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  I haven't seen this one reported yet (or seen it yet):

  http://logs.openstack.org/55/55455/1/check/check-tempest-devstack-vm-
  neutron/28d1ed7/console.html

  http://paste.openstack.org/show/50561/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1248757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218391] Re: tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestXML.test_delete_image_that_is_not_yet_active spurious failure

2013-12-12 Thread Sean Dague
there is only 1 hit in the last 2 weeks on this test, so I actually
think it's closed

** Changed in: nova
   Status: Confirmed => Fix Released

** No longer affects: nova

** Changed in: tempest
   Status: Confirmed => Fix Released

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218391

Title:
  
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestXML.test_delete_image_that_is_not_yet_active
  spurious failure

Status in OpenStack Object Storage (Swift):
  Confirmed

Bug description:
  Looks like this is an intermittent failure:

  ft45.7: 
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestXML.test_delete_image_that_is_not_yet_active[gate,negative]_StringException:
 Empty attachments:
stderr
stdout

  Traceback (most recent call last):
File "tempest/api/compute/images/test_images_oneserver.py", line 161, in 
test_delete_image_that_is_not_yet_active
  resp, body = self.client.create_image(self.server['id'], snapshot_name)
File "tempest/services/compute/xml/images_client.py", line 105, in 
create_image
  str(Document(post_body)), self.headers)
File "tempest/common/rest_client.py", line 260, in post
  return self.request('POST', url, headers, body)
File "tempest/common/rest_client.py", line 388, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  raise exceptions.Duplicate(resp_body)
  Duplicate: An object with that identifier already exists

  Details: {'message': "Cannot 'createImage' while instance is in
  task_state image_uploading", 'code': '409'}

  http://logs.openstack.org/50/41350/3/gate/gate-nova-
  python26/80bbbf0/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/swift/+bug/1218391/+subscriptions



[Yahoo-eng-team] [Bug 1260440] Re: nova-compute host is added to scheduling pool before Neutron can bind network ports on said host

2013-12-12 Thread Clint Byrum
This breaks deployment of new clouds in TripleO sometimes, and will
likely break scaling too. Hence the Critical status.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260440

Title:
  nova-compute host is added to scheduling pool before Neutron can bind
  network ports on said host

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This is a race condition.

  Given a cloud with 0 compute nodes available, on a compute node:
  * Start up neutron-openvswitch-agent
  * Start up nova-compute
  * nova boot an instance

  Scenario 1:
  * neutron-openvswitch-agent registers with Neutron before nova tries to boot 
instance
  * port is bound to agent
  * instance boots with correct networking

  Scenario 2:
  * nova schedules instance to host before neutron-openvswitch-agent is 
registered with Neutron
  * nova instance fails with vif_type=binding_failed
  * instance is in ERROR state

  I would expect that Nova would not try to schedule instances on
  compute hosts that are not ready.

  Please also see this mailing list thread for more info:

  http://lists.openstack.org/pipermail/openstack-
  dev/2013-December/022084.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260440/+subscriptions



[Yahoo-eng-team] [Bug 1193113] Re: DevicePathInUse exception in devstack-vm-quantum

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1193113

Title:
  DevicePathInUse exception in devstack-vm-quantum

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  I just got this during verification of one of my changes.  I don't
  think it's related to the change
  (https://review.openstack.org/#/c/33478/) so I'm reporting it here
  before I reverify.

  Full log: http://logs.openstack.org/33478/1/gate/gate-tempest-
  devstack-vm-quantum/32609/logs/screen-n-cpu.txt.gz

  Also, this was for stable/grizzly.  I'm not sure how to specify that
  in LP.

  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py", line 430, in 
_process_data
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp rval = 
self.proxy.dispatch(ctxt, version, method, **args)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/openstack/common/rpc/dispatcher.py", line 133, in 
dispatch
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/exception.py", line 117, in wrapped
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp 
temp_level, payload)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/exception.py", line 94, in wrapped
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 209, in decorated_function
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp pass
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 195, in decorated_function
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 237, in decorated_function
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 224, in decorated_function
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2854, in 
reserve_block_device_name
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return 
do_reserve()
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 242, in inner
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp retval 
= f(*args, **kwargs)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2843, in do_reserve
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp 
context, instance, bdms, device)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/new/nova/nova/compute/utils.py", line 165, in 
get_device_name_for_instance
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp raise 
exception.DevicePathInUse(path=device)
  2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp 
DevicePathInUse: The supplied device path (/dev/vdb) is in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1193113/+subscriptions


[Yahoo-eng-team] [Bug 1218190] Re: Use assertEqual instead of assertEquals in unitttest

2013-12-12 Thread Jeff Peeler
** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
Milestone: None => icehouse-2

** Changed in: heat
 Assignee: (unassigned) => Jeff Peeler (jpeeler-z)

** Changed in: heat
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1218190

Title:
  Use assertEqual instead of assertEquals in unitttest

Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Neutron:
  Fix Committed

Bug description:
  I noticed that [keystone, python-keystoneclient, python-neutronclient]
  configure tox.ini with a py33 test; however, assertEquals is deprecated
  in py3 (though fine on py2), so I think it is better to change all
  assertEquals calls to assertEqual.
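
  For illustration, the mechanical change is tiny (hypothetical test
  case, not from any of the affected repos):

  import unittest

  class ExampleTest(unittest.TestCase):
      def test_addition(self):
          # Deprecated alias (warns under py3):
          #     self.assertEquals(1 + 1, 2)
          # Preferred spelling, valid on both py2 and py3:
          self.assertEqual(1 + 1, 2)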

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1218190/+subscriptions



[Yahoo-eng-team] [Bug 1256043] Re: Need to add Development environment files to ignore list

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1256043

Title:
  Need to add Development environment files to ignore list

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  In Progress
Status in Python client library for heat:
  Won't Fix
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Neutron:
  In Progress
Status in Python client library for Nova:
  In Progress
Status in Python client library for Swift:
  Won't Fix
Status in OpenStack Object Storage (Swift):
  Won't Fix

Bug description:
  Following files generated by Eclipse development environment should be
  in ignore list to avoid their inclusion during a git push.

  .project
  .pydevproject
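
  For reference, the fix in each repository is a two-line .gitignore
  addition:

  # Eclipse / PyDev project metadata
  .project
  .pydevproject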

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1256043/+subscriptions



[Yahoo-eng-team] [Bug 1258379] Re: vpnservice's router must have gateway interface set

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258379

Title:
  vpnservice's router must have gateway interface set

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  at line
  
https://github.com/openstack/neutron/blob/master/neutron/services/vpn/service_drivers/ipsec.py#L172

  it is obvious that the router must have a gateway interface set before
  it can be used as a vpnservice router.
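
  The check amounts to roughly the following (a sketch; treat the exact
  'gw_port_id' key as an assumption about the router dict the linked
  code inspects):

  def router_has_gateway(router):
      # A VPN service only works if its router has an external gateway
      # interface through which IPsec traffic can leave.
      return bool(router.get('gw_port_id'))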

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258379/+subscriptions



[Yahoo-eng-team] [Bug 1242898] Re: tearDownClass (tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML): tearDownClass does not call the super's tearDownClass

2013-12-12 Thread Mauro Sergio Martins Rodrigues
If you look into http://logs.openstack.org/48/59948/4/check/check-tempest-dsvm-neutron-pg-isolated/dab4997/logs/screen-q-lbaas.txt.gz
you will see some tracebacks; it's probably an error in neutron.

Note that I took this from http://logs.openstack.org/48/59948/4/check/check-tempest-dsvm-neutron-pg-isolated/dab4997/,
not the original trace, which has already been deleted.
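
For reference, the pattern tempest's atexit validator enforces is plain
super() chaining (generic unittest sketch):

import unittest

class ExampleTest(unittest.TestCase):
    @classmethod
    def tearDownClass(cls):
        # Class-level cleanup goes here; then chain up so the base
        # class can deregister this class from the validator.
        super(ExampleTest, cls).tearDownClass()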

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1242898

Title:
  tearDownClass
  (tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML):
  tearDownClass does not call the super's tearDownClass

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Invalid

Bug description:
  From http://logs.openstack.org/32/47432/16/check/check-tempest-
  devstack-vm-neutron-pg-isolated/c2a0dd3/console.html

  ...

  tearDownClass
  (tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  ... FAIL

  ...

  2013-10-21 19:17:53.068 | Error in atexit._run_exitfuncs:
  2013-10-21 19:17:53.068 | Traceback (most recent call last):
  2013-10-21 19:17:53.068 |   File "/usr/lib/python2.7/atexit.py", line 24, in 
_run_exitfuncs
  2013-10-21 19:17:53.069 | func(*targs, **kargs)
  2013-10-21 19:17:53.069 |   File "tempest/test.py", line 167, in 
validate_tearDownClass
  2013-10-21 19:17:53.069 | + str(at_exit_set))
  2013-10-21 19:17:53.069 | RuntimeError: tearDownClass does not calls the 
super's tearDownClass in these classes: set([])
  2013-10-21 19:17:53.070 | Error in sys.exitfunc:
  2013-10-21 19:17:53.221 | 
  2013-10-21 19:17:53.221 | process-returncode
  2013-10-21 19:17:53.221 | process-returncode ... FAIL
  2013-10-21 19:17:53.614 | 
  2013-10-21 19:17:53.614 | 
==
  2013-10-21 19:17:53.614 | FAIL: tearDownClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-10-21 19:17:53.614 | tearDownClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-10-21 19:17:53.614 | 
--
  2013-10-21 19:17:53.614 | _StringException: Traceback (most recent call last):
  2013-10-21 19:17:53.615 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 95, in tearDownClass
  2013-10-21 19:17:53.615 | super(ServerRescueTestJSON, cls).tearDownClass()
  2013-10-21 19:17:53.615 |   File "tempest/api/compute/base.py", line 132, in 
tearDownClass
  2013-10-21 19:17:53.615 | cls.isolated_creds.clear_isolated_creds()
  2013-10-21 19:17:53.615 |   File "tempest/common/isolated_creds.py", line 
453, in clear_isolated_creds
  2013-10-21 19:17:53.615 | self._clear_isolated_net_resources()
  2013-10-21 19:17:53.615 |   File "tempest/common/isolated_creds.py", line 
445, in _clear_isolated_net_resources
  2013-10-21 19:17:53.616 | self._clear_isolated_network(network['id'], 
network['name'])
  2013-10-21 19:17:53.616 |   File "tempest/common/isolated_creds.py", line 
399, in _clear_isolated_network
  2013-10-21 19:17:53.616 | net_client.delete_network(network_id)
  2013-10-21 19:17:53.616 |   File 
"tempest/services/network/json/network_client.py", line 76, in delete_network
  2013-10-21 19:17:53.616 | resp, body = self.delete(uri, self.headers)
  2013-10-21 19:17:53.616 |   File "tempest/common/rest_client.py", line 308, 
in delete
  2013-10-21 19:17:53.617 | return self.request('DELETE', url, headers)
  2013-10-21 19:17:53.617 |   File "tempest/common/rest_client.py", line 436, 
in request
  2013-10-21 19:17:53.617 | resp, resp_body)
  2013-10-21 19:17:53.617 |   File "tempest/common/rest_client.py", line 522, 
in _error_checker
  2013-10-21 19:17:53.617 | raise exceptions.ServerFault(message)
  2013-10-21 19:17:53.617 | ServerFault: Got server fault
  2013-10-21 19:17:53.617 | Details: {"NeutronError": "Request Failed: internal 
server error while processing your request."}
  2013-10-21 19:17:53.618 | 
  2013-10-21 19:17:53.618 | 
  2013-10-21 19:17:53.618 | 
==
  2013-10-21 19:17:53.618 | FAIL: process-returncode
  2013-10-21 19:17:53.619 | process-returncode
  2013-10-21 19:17:53.619 | 
--
  2013-10-21 19:17:53.619 | _StringException: Binary content:
  2013-10-21 19:17:53.619 |   traceback (test/plain; charset="utf8")
  2013-10-21 19:17:53.619 | 
  2013-10-21 19:17:53.619 | 
  2013-10-21 19:17:53.620 | 
--
  2013-10-21 19:17:53.620 | Ran 237 tests in 914.828s
  2013-10-21 19:17:53.637 | 
  2013-10-21 19:17:53.638 | FAILED (failures=2, skipped=8)

To manage notifications

[Yahoo-eng-team] [Bug 1181567] Re: tempest: test_create_server / wait_for_server_status timeout

2013-12-12 Thread Sean Dague
Not actually a tempest bug, this is a race in Nova

** Changed in: tempest
   Status: Incomplete => Invalid

** Summary changed:

- tempest: test_create_server / wait_for_server_status timeout
+ Nova compute guest still stuck in BUILD state after 400s

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1181567

Title:
  Nova compute guest still stuck in BUILD state after 400s

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  Failure occurred on https://review.openstack.org/#/c/29591/2

  http://logs.openstack.org/29591/2/gate/gate-tempest-devstack-vm-
  quantum/23189/console.html.gz

  2013-05-17 22:54:02.079 | 
==
  2013-05-17 22:54:02.079 | ERROR: test suite for 
  2013-05-17 22:54:02.079 | 
--
  2013-05-17 22:54:02.079 | Traceback (most recent call last):
  2013-05-17 22:54:02.079 |   File 
"/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run
  2013-05-17 22:54:02.080 | self.setUp()
  2013-05-17 22:54:02.080 |   File 
"/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp
  2013-05-17 22:54:02.080 | self.setupContext(ancestor)
  2013-05-17 22:54:02.080 |   File 
"/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in setupContext
  2013-05-17 22:54:02.080 | try_run(context, names)
  2013-05-17 22:54:02.080 |   File 
"/usr/lib/python2.7/dist-packages/nose/util.py", line 478, in try_run
  2013-05-17 22:54:02.080 | return func()
  2013-05-17 22:54:02.080 |   File 
"/opt/stack/new/tempest/tempest/tests/compute/servers/test_create_server.py", 
line 57, in setUpClass
  2013-05-17 22:54:02.080 | 
cls.client.wait_for_server_status(cls.server_initial['id'], 'ACTIVE')
  2013-05-17 22:54:02.081 |   File 
"/opt/stack/new/tempest/tempest/services/compute/xml/servers_client.py", line 
311, in wait_for_server_status
  2013-05-17 22:54:02.081 | raise exceptions.TimeoutException(message)
  2013-05-17 22:54:02.081 | TimeoutException: Request timed out
  2013-05-17 22:54:02.081 | Details: Request timed out
  2013-05-17 22:54:02.081 | Details: Server 
87c1dc14-44b1-406f-a7a0-c41876dc9111 failed to reach ACTIVE status within the 
required time (400 s). Current status: BUILD.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1181567/+subscriptions



[Yahoo-eng-team] [Bug 1260454] [NEW] Add cinder 'extend' volume functionality

2013-12-12 Thread Jesse Pretorius
Public bug reported:

Cinder now has the ability to 'extend' (ie grow/expand/resize up) a
volume. This functionality should be exposed through Horizon.
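
On the API side this maps onto the existing cinderclient call; a
minimal sketch (the credentials and volume ID are placeholders):

from cinderclient.v1 import client

cinder = client.Client('user', 'password', 'tenant',
                       'http://keystone:5000/v2.0')
volume = cinder.volumes.get('11111111-2222-3333-4444-555555555555')
# Grow the volume to 20 GB; the new size must be larger than the
# current size and the volume must be in the 'available' state.
cinder.volumes.extend(volume, 20)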

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260454

Title:
  Add cinder 'extend' volume functionality

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Cinder now has the ability to 'extend' (ie grow/expand/resize up) a
  volume. This functionality should be exposed through Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260454/+subscriptions



[Yahoo-eng-team] [Bug 1218279] Re: setUpClass (tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON) Failed at server creation

2013-12-12 Thread Sean Dague
I actually think this was the enable/disable server race, so I'm going
to close this for now. Reopen if it becomes an issue again in the future.

** Changed in: nova
   Status: New => Fix Released

** Changed in: tempest
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218279

Title:
  setUpClass
  (tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON)
  Failed at server creation

Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  http://logs.openstack.org/78/44178/1/gate/gate-tempest-devstack-vm-
  postgres-full/042f00e/

  2013-08-29 08:51:47.396 | FAIL: setUpClass 
(tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON)
  2013-08-29 08:51:47.397 | setUpClass 
(tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON)
  2013-08-29 08:51:47.397 | 
--
  2013-08-29 08:51:47.397 | _StringException: Traceback (most recent call last):
  2013-08-29 08:51:47.397 |   File 
"tempest/api/compute/images/test_images_oneserver.py", line 50, in setUpClass
  2013-08-29 08:51:47.397 | cls.tearDownClass()
  2013-08-29 08:51:47.397 |   File "tempest/api/compute/base.py", line 114, in 
tearDownClass
  2013-08-29 08:51:47.398 | super(BaseComputeTest, cls).tearDownClass()
  2013-08-29 08:51:47.398 |   File "tempest/test.py", line 144, in tearDownClass
  2013-08-29 08:51:47.398 | at_exit_set.remove(cls)
  2013-08-29 08:51:47.398 | KeyError: 

  The server creation is what actually failed.
  The setUpClass attempts to call tearDownClass on error and throws a
  different exception; raising the correct exception is a tempest-side
  issue, but the root cause probably is not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218279/+subscriptions



[Yahoo-eng-team] [Bug 1260440] [NEW] nova-compute host is added to scheduling pool before Neutron can bind network ports on said host

2013-12-12 Thread Clint Byrum
Public bug reported:

This is a race condition.

Given a cloud with 0 compute nodes available, on a compute node:
* Start up neutron-openvswitch-agent
* Start up nova-compute
* nova boot an instance

Scenario 1:
* neutron-openvswitch-agent registers with Neutron before nova tries to boot 
instance
* port is bound to agent
* instance boots with correct networking

Scenario 2:
* nova schedules instance to host before neutron-openvswitch-agent is 
registered with Neutron
* nova instance fails with vif_type=binding_failed
* instance is in ERROR state

I would expect that Nova would not try to schedule instances on compute
hosts that are not ready.

Please also see this mailing list thread for more info:

http://lists.openstack.org/pipermail/openstack-
dev/2013-December/022084.html
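
One possible mitigation, purely as a sketch (this filter does not exist
in Nova; the agent lookup is an assumption about the neutron API):

class NeutronL2AgentFilter(object):
    """Only pass compute hosts whose neutron L2 agent has already
    registered with Neutron and is reported alive."""

    def __init__(self, neutron):
        # 'neutron' is an authenticated neutronclient instance.
        self.neutron = neutron

    def host_passes(self, host_state, filter_properties):
        agents = self.neutron.list_agents(
            host=host_state.host,
            agent_type='Open vSwitch agent')['agents']
        return any(agent['alive'] for agent in agents)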

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260440

Title:
  nova-compute host is added to scheduling pool before Neutron can bind
  network ports on said host

Status in OpenStack Compute (Nova):
  New

Bug description:
  This is a race condition.

  Given a cloud with 0 compute nodes available, on a compute node:
  * Start up neutron-openvswitch-agent
  * Start up nova-compute
  * nova boot an instance

  Scenario 1:
  * neutron-openvswitch-agent registers with Neutron before nova tries to boot 
instance
  * port is bound to agent
  * instance boots with correct networking

  Scenario 2:
  * nova schedules instance to host before neutron-openvswitch-agent is 
registered with Neutron
  * nova instance fails with vif_type=binding_failed
  * instance is in ERROR state

  I would expect that Nova would not try to schedule instances on
  compute hosts that are not ready.

  Please also see this mailing list thread for more info:

  http://lists.openstack.org/pipermail/openstack-
  dev/2013-December/022084.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260440/+subscriptions



[Yahoo-eng-team] [Bug 1260438] [NEW] Display N1K network profile information in network

2013-12-12 Thread Abishek Subramanian
Public bug reported:

When an N1K profile is associated with a network, currently the N1K
profile information is not displayed in the network detail.

** Affects: horizon
 Importance: Undecided
 Assignee: Abishek Subramanian (absubram)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Abishek Subramanian (absubram)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260438

Title:
  Display N1K network profile information in network

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When an N1K profile is associated with a network, currently the N1K
  profile information is not displayed in the network detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260438/+subscriptions



[Yahoo-eng-team] [Bug 1260439] [NEW] Error message when creating a user with the default member role

2013-12-12 Thread Guilherme Birk
Public bug reported:

When you create a new user and select from the roles list the same role
as the default member role set in keystone.conf, Horizon shows the error
message "Unable to add user to primary project". Despite this message,
the user is created and the member role is granted to the user in the
selected project.

If you look the keystone.log file, you will see the following warning:

" WARNING keystone.common.wsgi [-] Conflict occurred attempting to store
role grant. User  already has role  in tenant
"

Example:
I have the "member_role_name = _member_" entry in my keystone.conf file.
When I create a User called "Test_User" in the project "Test_Project" with the 
role "_member_", the error message is shown.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: error horizon message role

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260439

Title:
  Error message when creating a user with the default member role

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When you create a new user and select from the roles list the same
  role as the default member role set in keystone.conf, Horizon shows
  the error message "Unable to add user to primary project". Despite
  this message, the user is created and the member role is granted to
  the user in the selected project.

  If you look the keystone.log file, you will see the following warning:

  " WARNING keystone.common.wsgi [-] Conflict occurred attempting to
  store role grant. User  already has role  in tenant
  "

  Example:
  I have the "member_role_name = _member_" entry in my keystone.conf file.
  When I create a User called "Test_User" in the project "Test_Project" with 
the role "_member_", the error message is shown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260439/+subscriptions



[Yahoo-eng-team] [Bug 1260437] [NEW] Project name display when editing N1K profile

2013-12-12 Thread Abishek Subramanian
Public bug reported:

When editing an N1K network profile, the project already associated with
the network profile is not displayed.

** Affects: horizon
 Importance: Undecided
 Assignee: Abishek Subramanian (absubram)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Abishek Subramanian (absubram)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260437

Title:
  Project name display when editing N1K profile

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When editing an N1K network profile, the project already associated
  with the network profile is not displayed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260437/+subscriptions



[Yahoo-eng-team] [Bug 1260436] [NEW] Multi-nic support with N1K plugin

2013-12-12 Thread Abishek Subramanian
Public bug reported:

When the Cisco N1K neutron plugin is being used, an instance cannot be
launched via Horizon with the ability to have multiple NICs; only the
first network is used for all created NICs.

This bug addresses that issue.

** Affects: horizon
 Importance: Undecided
 Assignee: Abishek Subramanian (absubram)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Abishek Subramanian (absubram)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260436

Title:
  Multi-nic support with N1K plugin

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When the Cisco N1K neutron plugin is being used, an instance cannot be
  launched via Horizon with the ability to have multiple NICs; only the
  first network is used for all created NICs.

  This bug addresses that issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260436/+subscriptions



[Yahoo-eng-team] [Bug 1260435] [NEW] Edit N1K network profile

2013-12-12 Thread Abishek Subramanian
Public bug reported:

Fix an issue with the N1K network profile update operation by ensuring
that only the editable fields are edited. Also, to account for new
additions in the neutron N1K plugin, ensure the new fields are handled
in the update section.

** Affects: horizon
 Importance: Undecided
 Assignee: Abishek Subramanian (absubram)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Abishek Subramanian (absubram)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260435

Title:
  Edit N1K network profile

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Fix an issue with the N1K network profile update operation by
  ensuring that only the editable fields are edited. Also, to account
  for new additions in the neutron N1K plugin, ensure the new fields
  are handled in the update section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260435/+subscriptions



[Yahoo-eng-team] [Bug 1260432] [NEW] nova-compute can't be setting up during install on trusty

2013-12-12 Thread Ming Lei
Public bug reported:


1, during install:
Setting up nova-compute (1:2014.1~b1-0ubuntu2) ...
start: Job failed to start
invoke-rc.d: initscript nova-compute, action "start" failed.
dpkg: error processing nova-compute (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up nova-compute-kvm (1:2014.1~b1-0ubuntu2) ...
Errors were encountered while processing:
 nova-compute
E: Sub-process /usr/bin/dpkg returned an error code (1)

2, the system is latest trusty:
ming@arm64:~$ sudo apt-get dist-upgrade
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
  dnsmasq-utils iputils-arping libboost-system1.53.0 libboost-thread1.53.0
  libclass-isa-perl libopts25 libswitch-perl ttf-dejavu-core
Use 'apt-get autoremove' to remove them.
The following packages have been kept back:
  checkbox-cli
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

3, looks like /usr/bin/nova-compute can't be started:
ming@arm64:~$ nova-compute 
2013-12-12 17:57:19.992 13823 ERROR stevedore.extension [-] Could not load 
'file': (WebOb 1.3 (/usr/lib/python2.7/dist-packages), 
Requirement.parse('WebOb>=1.2.3,<1.3'))
2013-12-12 17:57:19.993 13823 ERROR stevedore.extension [-] (WebOb 1.3 
(/usr/lib/python2.7/dist-packages), Requirement.parse('WebOb>=1.2.3,<1.3'))
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension Traceback (most recent 
call last):
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 134, in 
_load_plugins
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension invoke_kwds,
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 146, in 
_load_one_plugin
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension plugin = ep.load()
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2107, in load
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension if require: 
self.require(env, installer)
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2120, in require
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension 
working_set.resolve(self.dist.requires(self.extras),env,installer)))
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 580, in resolve
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension raise 
VersionConflict(dist,req) # XXX put more info here
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension VersionConflict: (WebOb 
1.3 (/usr/lib/python2.7/dist-packages), Requirement.parse('WebOb>=1.2.3,<1.3'))
2013-12-12 17:57:19.993 13823 TRACE stevedore.extension 
2013-12-12 17:57:20.133 13823 ERROR nova.virt.driver [-] Compute driver option 
required, but not specified
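
The failure can be reproduced outside of nova: the entry-point loading
boils down to a pkg_resources requirement check (illustrative):

import pkg_resources

# With WebOb 1.3 installed this raises pkg_resources.VersionConflict,
# which is exactly what stevedore reports at service startup.
pkg_resources.require('WebOb>=1.2.3,<1.3')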

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260432

Title:
  nova-compute can't be setting up during install on trusty

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  1, during install:
  Setting up nova-compute (1:2014.1~b1-0ubuntu2) ...
  start: Job failed to start
  invoke-rc.d: initscript nova-compute, action "start" failed.
  dpkg: error processing nova-compute (--configure):
   subprocess installed post-installation script returned error exit status 1
  Setting up nova-compute-kvm (1:2014.1~b1-0ubuntu2) ...
  Errors were encountered while processing:
   nova-compute
  E: Sub-process /usr/bin/dpkg returned an error code (1)

  2, the system is latest trusty:
  ming@arm64:~$ sudo apt-get dist-upgrade
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  Calculating upgrade... Done
  The following packages were automatically installed and are no longer 
required:
dnsmasq-utils iputils-arping libboost-system1.53.0 libboost-thread1.53.0
libclass-isa-perl libopts25 libswitch-perl ttf-dejavu-core
  Use 'apt-get autoremove' to remove them.
  The following packages have been kept back:
checkbox-cli
  0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

  3, looks like /usr/bin/nova-compute can't be started:
  ming@arm64:~$ nova-compute 
  2013-12-12 17:57:19.992 13823 ERROR stevedore.extension [-] Could not load 
'file': (WebOb 1.3 (/usr/lib/python2.7/dist-packages), 
Requirement.parse('WebOb>=1.2.3,<1.3'))
  2013-12-12 17:57:19.993 13823 ERROR stevedore.extension [-] (WebOb 1.3 
(/usr/lib/python2.7/dist-packages), Requirement.parse('WebOb>=1.2.3,<1.3'))
  2013-12-12 17:57:19.993 13823 TRACE stevedore.ext

[Yahoo-eng-team] [Bug 1260423] Re: Email shouldn't be a mandatory attribute

2013-12-12 Thread Julie Pichon
** Also affects: horizon/havana
   Importance: Undecided
   Status: New

** Changed in: horizon/havana
   Importance: Undecided => Medium

** Changed in: horizon/havana
 Assignee: (unassigned) => Julie Pichon (jpichon)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260423

Title:
  Email shouldn't be a mandatory attribute

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) havana series:
  New

Bug description:
  When using a LDAP backend, it's possible that a user won't have the
  "email" attribute defined, however it should still be possible to edit
  the other fields.

  Steps to reproduce (in an environment with keystone using a LDAP backend):
  1. Log in as admin
  2. Go to the Users dashboard
  3. Select a user that doesn't have an email defined

  Expected result:
  4. "Edit user" modal opens

  Actual result:
  4. Error 500

  Traceback:
  File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py" in get
154. form = self.get_form(form_class)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/forms/views.py" in 
get_form
82. return form_class(self.request, **self.get_form_kwargs())
  File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py" in 
get_form_kwargs
41. kwargs = {'initial': self.get_initial()}
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/users/views.py"
 in get_initial
103. 'email': user.email}
  File "/opt/stack/python-keystoneclient/keystoneclient/base.py" in __getattr__
425. raise AttributeError(k)

  Exception Type: AttributeError at 
/admin/users/e005aa43475b403c8babdff86ea27c37/update/
  Exception Value: email

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260423/+subscriptions



[Yahoo-eng-team] [Bug 1211338] Re: "Direct" vs. "direct" in impl_qpid

2013-12-12 Thread Russell Bryant
nova grizzly patch: https://review.openstack.org/#/c/61831/

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Fix Released

** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1211338

Title:
  "Direct" vs. "direct" in impl_qpid

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  impl_qpid.py has {"type": "Direct"} (with a capital D) in one place.
  "direct" (lowercase) in others.  It appears that qpid is case-
  sensitive about exchange types, so the version with the capital D is
  invalid.  This ends up causing qpid to throw an error like:

  >> "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py",
  >> line 567, in _ewait\nself.check_error()\n', '  File
  >> "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py",
  >> line 556, in check_error\nraise self.error\n', 'NotFound:
  >> not-found: Exchange type not implemented: Direct
  >> (qpid/broker/SessionAdapter.cpp:117)(404)\n']

  It should be a one-character fix.
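
  Schematically, the fix is one character in the exchange options
  (illustrative snippet, not the verbatim impl_qpid.py source):

  # Broken: qpid treats exchange types as case-sensitive, so the
  # capitalized form is rejected with the not-found error above.
  node_opts = {"type": "Direct", "durable": True}

  # Fixed:
  node_opts = {"type": "direct", "durable": True}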

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1211338/+subscriptions



[Yahoo-eng-team] [Bug 1260423] [NEW] Email shouldn't be a mandatory attribute

2013-12-12 Thread Julie Pichon
Public bug reported:

When using a LDAP backend, it's possible that a user won't have the
"email" attribute defined, however it should still be possible to edit
the other fields.

Steps to reproduce (in an environment with keystone using a LDAP backend):
1. Log in as admin
2. Go to the Users dashboard
3. Select a user that doesn't have an email defined

Expected result:
4. "Edit user" modal opens

Actual result:
4. Error 500

Traceback:
File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py" in get
  154. form = self.get_form(form_class)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/forms/views.py" 
in get_form
  82. return form_class(self.request, **self.get_form_kwargs())
File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py" in 
get_form_kwargs
  41. kwargs = {'initial': self.get_initial()}
File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/users/views.py"
 in get_initial
  103. 'email': user.email}
File "/opt/stack/python-keystoneclient/keystoneclient/base.py" in __getattr__
  425. raise AttributeError(k)

Exception Type: AttributeError at 
/admin/users/e005aa43475b403c8babdff86ea27c37/update/
Exception Value: email
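
A minimal sketch of the kind of guard needed in get_initial
(hypothetical helper, not the merged patch):

def get_initial_email(user):
    # LDAP-backed users may have no email attribute at all, and
    # keystoneclient raises AttributeError for missing attributes,
    # so fall back to None instead of reading user.email directly.
    return getattr(user, 'email', None)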

** Affects: horizon
 Importance: Medium
 Assignee: Julie Pichon (jpichon)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260423

Title:
  Email shouldn't be a mandatory attribute

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When using a LDAP backend, it's possible that a user won't have the
  "email" attribute defined, however it should still be possible to edit
  the other fields.

  Steps to reproduce (in an environment with keystone using a LDAP backend):
  1. Log in as admin
  2. Go to the Users dashboard
  3. Select a user that doesn't have an email defined

  Expected result:
  4. "Edit user" modal opens

  Actual result:
  4. Error 500

  Traceback:
  File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py" in get
154. form = self.get_form(form_class)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/forms/views.py" in 
get_form
82. return form_class(self.request, **self.get_form_kwargs())
  File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py" in 
get_form_kwargs
41. kwargs = {'initial': self.get_initial()}
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/users/views.py"
 in get_initial
103. 'email': user.email}
  File "/opt/stack/python-keystoneclient/keystoneclient/base.py" in __getattr__
425. raise AttributeError(k)

  Exception Type: AttributeError at 
/admin/users/e005aa43475b403c8babdff86ea27c37/update/
  Exception Value: email

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260423/+subscriptions



[Yahoo-eng-team] [Bug 1242916] Re: metadata server update_all expects body but doesn't get it passed to it

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1242916

Title:
  metadata server update_all expects body but doesn't get it passed to
  it

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  This recently started showing up in n-api. It seems like mishandling of
  invalid client args from a negative test in tempest.
  Example: 
http://logs.openstack.org/03/52803/3/check/check-tempest-devstack-vm-postgres-full/354d7a3/logs/screen-n-api.txt.gz

  2013-10-21 20:04:51.724 20923 DEBUG routes.middleware [-] Matched PUT 
/6fa344aaf3034c4992bc30b3c06ad531/servers/a329912b-7874-4636-a08b-e40362e04ab2/metadata
 __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2013-10-21 20:04:51.724 20923 DEBUG routes.middleware [-] Route path: 
'/{project_id}/servers/{server_id}/metadata', defaults: {'action': 
u'update_all', 'controller': } __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2013-10-21 20:04:51.724 20923 DEBUG routes.middleware [-] Match dict: 
{'action': u'update_all', 'server_id': u'a329912b-7874-4636-a08b-e40362e04ab2', 
'project_id': u'6fa344aaf3034c4992bc30b3c06ad531', 'controller': 
} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2013-10-21 20:04:51.724 DEBUG nova.api.openstack.wsgi 
[req-23bdfb52-dae5-42f7-a5b7-17c319ed67ff 
ServerMetadataTestJSON-tempest-1354548727-user 
ServerMetadataTestJSON-tempest-1354548727-tenant] Empty body provided in 
request get_body /opt/stack/new/nova/nova/api/openstack/wsgi.py:839
  2013-10-21 20:04:51.724 DEBUG nova.api.openstack.wsgi 
[req-23bdfb52-dae5-42f7-a5b7-17c319ed67ff 
ServerMetadataTestJSON-tempest-1354548727-user 
ServerMetadataTestJSON-tempest-1354548727-tenant] Calling method > _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:962
  2013-10-21 20:04:51.725 ERROR nova.api.openstack.wsgi 
[req-23bdfb52-dae5-42f7-a5b7-17c319ed67ff 
ServerMetadataTestJSON-tempest-1354548727-user 
ServerMetadataTestJSON-tempest-1354548727-tenant] Exception handling resource: 
update_all() takes exactly 4 arguments (3 given)
  2013-10-21 20:04:51.725 20923 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
  2013-10-21 20:04:51.725 20923 TRACE nova.api.openstack.wsgi   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 997, in _process_stack
  2013-10-21 20:04:51.725 20923 TRACE nova.api.openstack.wsgi action_result 
= self.dispatch(meth, request, action_args)
  2013-10-21 20:04:51.725 20923 TRACE nova.api.openstack.wsgi   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 1078, in dispatch
  2013-10-21 20:04:51.725 20923 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  2013-10-21 20:04:51.725 20923 TRACE nova.api.openstack.wsgi TypeError: 
update_all() takes exactly 4 arguments (3 given)
  2013-10-21 20:04:51.725 20923 TRACE nova.api.openstack.wsgi 
  2013-10-21 20:04:51.726 DEBUG nova.api.openstack.wsgi 
[req-23bdfb52-dae5-42f7-a5b7-17c319ed67ff 
ServerMetadataTestJSON-tempest-1354548727-user 
ServerMetadataTestJSON-tempest-1354548727-tenant] Returning 400 to user: The 
server could not comply with the request since it is either malformed or 
otherwise incorrect. __call__ 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:1224
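
  The traceback boils down to a signature mismatch: the controller
  method requires a body argument, but the wsgi layer stops passing one
  when the request body is empty. Schematically (simplified,
  hypothetical class):

  class ServerMetadataController(object):
      def update_all(self, req, server_id, body):
          # With an empty request body the dispatcher effectively calls
          # update_all(req, server_id), and Python raises:
          #   TypeError: update_all() takes exactly 4 arguments (3 given)
          # ('self' counts as one of the four expected arguments).
          pass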

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1242916/+subscriptions



[Yahoo-eng-team] [Bug 1260406] [NEW] allow disable wsgi keepalive

2013-12-12 Thread Edward Hope-Morley
Public bug reported:

The wsgi server used in most if not all openstack services currently
has keepalive=True by default, thus keeping connections open after each
request. This can cause problems when using load balancers in front of
these services: connections for requests that take a long time can get
closed in the load balancer once a timeout has expired. This can then
cause issues if a client performs a request using the same source port
as a previous request that is not closed in the LB but is still open in
the server, due to TCP packet sequencing in the LB and the new client
not expecting the connection to already be open. So it would be useful
to be able to disable wsgi keepalive.
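
A minimal sketch of the behaviour being requested, assuming eventlet's
wsgi server and its keepalive flag:

import eventlet
from eventlet import wsgi

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['ok\n']

sock = eventlet.listen(('0.0.0.0', 8080))
# keepalive=False closes each connection after its response, so a load
# balancer in front never ends up holding a half-open connection.
wsgi.server(sock, app, keepalive=False)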

** Affects: keystone
 Importance: Undecided
 Assignee: Edward Hope-Morley (hopem)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: keystone
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260406

Title:
  allow disable wsgi keepalive

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The wsgi server used in most if not all openstack services currently
  has keepalive=True by default, thus keeping connections open after
  each request. This can cause problems when using load balancers in
  front of these services: connections for requests that take a long
  time can get closed in the load balancer once a timeout has expired.
  This can then cause issues if a client performs a request using the
  same source port as a previous request that is not closed in the LB
  but is still open in the server, due to TCP packet sequencing in the
  LB and the new client not expecting the connection to already be
  open. So it would be useful to be able to disable wsgi keepalive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1260406/+subscriptions



[Yahoo-eng-team] [Bug 1240728] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_attach_volume is nondeterministic

2013-12-12 Thread Sean Dague
** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: cinder
   Importance: Undecided => High

** Changed in: cinder
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240728

Title:
  
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_attach_volume
  is nondeterministic

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  New

Bug description:
  Traceback (most recent call last):
    File "tempest/api/compute/servers/test_server_rescue.py", line 111, in 
_unrescue
  self.servers_client.wait_for_server_status(server_id, 'ACTIVE')
    File "tempest/services/compute/json/servers_client.py", line 156, in 
wait_for_server_status
  return waiters.wait_for_server_status(self, server_id, status)
    File "tempest/common/waiters.py", line 80, in wait_for_server_status
  raise exceptions.TimeoutException(message)
  TimeoutException: Request timed out
  Details: Server 802897a6-6793-4af2-9d84-8750be518380 failed to reach ACTIVE 
status within the required time (400 s). Current status: SHUTOFF.

  Sample failure: http://logs.openstack.org/51/52151/1/gate/gate-
  tempest-devstack-vm-full/6b393f5/

  Basic query for the failure string:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIkZBSUw6IHRlbXBlc3QuYXBpLmNvbXB1dGUuc2VydmVycy50ZXN0X3NlcnZlcl9yZXNjdWUuU2VydmVyUmVzY3VlVGVzdEpTT04udGVzdF9yZXNjdWVkX3ZtX2F0dGFjaF92b2x1bWVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiYWxsIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MTk2MTIyMjkwMSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1240728/+subscriptions



[Yahoo-eng-team] [Bug 1226412] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML.test_rescue_paused_instance

2013-12-12 Thread Sean Dague
This is a nova race bug

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Confirmed

** Summary changed:

- 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML.test_rescue_paused_instance
+ guest doesn't reach PAUSED state within 200s in the gate

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226412

Title:
  guest doesn't reach PAUSED state within 200s in the gate

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  Tempest test fails
  
:tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML.test_rescue_paused_instance

  
  Traceback (most recent call last):
File "tempest/api/compute/servers/test_server_rescue.py", line 135, in 
test_rescue_paused_instance
  self.servers_client.wait_for_server_status(self.server_id, 'PAUSED')
File "tempest/services/compute/xml/servers_client.py", line 340, in 
wait_for_server_status
  return waiters.wait_for_server_status(self, server_id, status)
File "tempest/common/waiters.py", line 80, in wait_for_server_status
  raise exceptions.TimeoutException(message)
  TimeoutException: Request timed out
  Details: Server 8539b620-909c-46a6-9293-1b1add06a343 failed to reach PAUSED 
status within the required time (400 s). Current status: ACTIVE.

  
  see 
http://logs.openstack.org/55/46855/3/check/gate-tempest-devstack-vm-postgres-full/28acd2d/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226412/+subscriptions



[Yahoo-eng-team] [Bug 1242645] Re: Resource tracking does not take into account the current resources on the host

2013-12-12 Thread Gary Kotton
** Changed in: nova
 Assignee: Gary Kotton (garyk) => (unassigned)

** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova
Milestone: icehouse-2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1242645

Title:
  Resource tracking does not take into account the current resources on
  the host

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The trace below is from the log file:

  2013-10-21 04:28:27.210 INFO nova.compute.resource_tracker [-] Hypervisor: 
free disk (GB): 11
  2013-10-21 04:28:27.263 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 4898
  2013-10-21 04:28:27.263 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 15
  2013-10-21 04:28:27.263 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 4

  In this specific case there are 4 GB already used on the hypervisor -
  for the image cache etc.

  
  +----------------------+-----------------------------------------------+
  | Property             | Value                                         |
  +----------------------+-----------------------------------------------+
  | hypervisor_hostname  | domain-c7(Cluster31)                          |
  | cpu_info             | {"model": ["Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz"], "vendor": ["VMware, Inc."], "topology": {"cores": 4, "threads": 4}} |
  | free_disk_gb         | 15                                            |
  | hypervisor_version   | 51                                            |
  | disk_available_least | None                                          |
  | local_gb             | 15                                            |
  | free_ram_mb          | 4898                                          |
  | id                   | 1                                             |
  | vcpus_used           | 0                                             |
  | hypervisor_type      | VMware vCenter Server                         |
  | local_gb_used        | 0                                             |
  | memory_mb_used       | 512                                           |
  | memory_mb            | 5410                                          |
  | current_workload     | 0                                             |
  | vcpus                | 4                                             |
  | running_vms          | 0                                             |
  | service_id           | 5                                             |
  | service_host         | os-devstack                                   |
  +----------------------+-----------------------------------------------+
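
  The numbers above illustrate the problem: the tracker computes free disk
  purely from its own bookkeeping (local_gb - local_gb_used = 15 GB), while
  the hypervisor itself reports only 11 GB free. A minimal sketch of the
  arithmetic (illustrative names, not nova's actual code):

    # Values taken from the table and log lines above.
    local_gb = 15            # disk the tracker believes the host has
    local_gb_used = 0        # disk used by instances the tracker knows of
    hypervisor_free_gb = 11  # what the hypervisor actually reports

    tracker_free_gb = local_gb - local_gb_used  # 15 GB, overstated
    # Image cache and similar host usage is invisible to the tracker:
    unaccounted_gb = tracker_free_gb - hypervisor_free_gb  # 4 GB
    print(tracker_free_gb, hypervisor_free_gb, unaccounted_gb)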

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1242645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239923] Re: neutron doesn't use request-ids

2013-12-12 Thread Sean Dague
Not a tempest bug.

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239923

Title:
  neutron doesn't use request-ids

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Just like nova and cinder, neutron should use request-ids for logging
  and to return to users.

  request-ids are used to help look up the logs for a specific request.

  Also they make logs more usable.  Here is an example of how they can
  be used. http://git.openstack.org/cgit/openstack/oslo-
  incubator/tree/openstack/common/context.py

  
etc/cinder/cinder.conf.sample:#logging_context_format_string=%(asctime)s.%(msecs)03d
  %(process)d %(levelname)s %(name)s [%(request_id)s %(user)s %(
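
  As a rough sketch of what this asks for - not neutron's eventual
  implementation - generating a request-id per API call and carrying it
  through the logging context could look like this (RequestContext and
  handle_request are illustrative names):

    import logging
    import uuid

    logging.basicConfig(
        format='%(asctime)s %(levelname)s [%(request_id)s] %(message)s',
        level=logging.INFO)
    LOG = logging.getLogger(__name__)

    class RequestContext(object):
        def __init__(self):
            # Same req- prefix convention as nova and cinder.
            self.request_id = 'req-' + str(uuid.uuid4())

    def handle_request(ctxt):
        # Every log line for this request carries the same request_id,
        # which would also be returned to the user in a response header.
        LOG.info('processing request',
                 extra={'request_id': ctxt.request_id})
        return ctxt.request_id

    handle_request(RequestContext())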

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1239923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252854] Re: setUpClass ServerAddressesTest FAIL

2013-12-12 Thread Attila Fazekas
It is very unlikely that tempest caused the ERROR status on the server,
so adding nova.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1252854

Title:
  setUpClass ServerAddressesTest FAIL

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Incomplete

Bug description:
  I encountered the following Tempest failure during the initial Jenkins
  tests for the following submission to python-cinderclient:

  https://review.openstack.org/#/c/57245/

  http://logs.openstack.org/45/57245/1/check/check-grenade-devstack-
  vm/1ad2c17/

  2013-11-19 16:07:09.167 | 
==
  2013-11-19 16:07:09.167 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTest)
  2013-11-19 16:07:09.167 | setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTest)
  2013-11-19 16:07:09.168 | 
--
  2013-11-19 16:07:09.168 | _StringException: Traceback (most recent call last):
  2013-11-19 16:07:09.168 |   File 
"tempest/api/compute/servers/test_server_addresses.py", line 31, in setUpClass
  2013-11-19 16:07:09.169 | resp, cls.server = 
cls.create_test_server(wait_until='ACTIVE')
  2013-11-19 16:07:09.169 |   File "tempest/api/compute/base.py", line 118, in 
create_test_server
  2013-11-19 16:07:09.169 | server['id'], kwargs['wait_until'])
  2013-11-19 16:07:09.169 |   File 
"tempest/services/compute/json/servers_client.py", line 160, in 
wait_for_server_status
  2013-11-19 16:07:09.170 | extra_timeout=extra_timeout)
  2013-11-19 16:07:09.170 |   File "tempest/common/waiters.py", line 73, in 
wait_for_server_status
  2013-11-19 16:07:09.170 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-11-19 16:07:09.171 | BuildErrorException: Server 
e4f08fa8-bf4c-4994-b5f9-97566a393baf failed to build and is in ERROR status

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260075] Re: VMware: NotAuthenticated occurred in the call to RetrievePropertiesEx

2013-12-12 Thread Shawn Hartsock
Marked High because this impacts the CI environment we use to approve
other patches. If not for that context, I would mark this Medium.

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Sabari Kumar Murugesan (smurugesan)

** Changed in: nova
Milestone: None => icehouse-2

** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

** Changed in: openstack-vmwareapi-team
   Status: New => Confirmed

** Changed in: openstack-vmwareapi-team
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260075

Title:
  VMware: NotAuthenticated occurred in the call to RetrievePropertiesEx

Status in OpenStack Compute (Nova):
  Confirmed
Status in The OpenStack VMwareAPI subTeam:
  Confirmed

Bug description:
  The VMware Minesweeper CI occasionally runs into this error when
  trying to boot an instance:

  2013-12-11 04:50:15.048 20785 DEBUG nova.virt.vmwareapi.driver [-] Task 
[ReconfigVM_Task] (returnval){
 value = "task-322"
 _type = "Task"
   } status: success _poll_task 
/opt/stack/nova/nova/virt/vmwareapi/driver.py:926
  Reconfigured VM instance to enable vnc on port - 5986 _set_vnc_config 
/opt/stack/nova/nova/virt/vmwareapi/vmops.py:1461
  Instance failed to spawn
  Traceback (most recent call last):
File "/opt/stack/nova/nova/compute/manager.py", line 1461, in _spawn
  block_device_info)
File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 628, in spawn
  admin_password, network_info, block_device_info)
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 435, in spawn
  upload_folder, upload_name + ".vmdk")):
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1556, in 
_check_if_folder_file_exists
  "browser")
File "/opt/stack/nova/nova/virt/vmwareapi/vim_util.py", line 173, in 
get_dynamic_property
  property_dict = get_dynamic_properties(vim, mobj, type, [property_name])
File "/opt/stack/nova/nova/virt/vmwareapi/vim_util.py", line 179, in 
get_dynamic_properties
  obj_content = get_object_properties(vim, None, mobj, type, property_names)
File "/opt/stack/nova/nova/virt/vmwareapi/vim_util.py", line 168, in 
get_object_properties
  options=options)
File "/opt/stack/nova/nova/virt/vmwareapi/vim.py", line 187, in 
vim_request_handler
  fault_checker(response)
File "/opt/stack/nova/nova/virt/vmwareapi/error_util.py", line 99, in 
retrievepropertiesex_fault_checker
  exc_msg_list))
  VimFaultException: Error(s) NotAuthenticated occurred in the call to 
RetrievePropertiesEx

  Full logs here for a CI build where this occurred are available here:
  http://162.209.83.206/logs/35303/31/
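
  A common mitigation for stale-session faults like this is to
  re-authenticate and retry the failed call. Purely as an illustration -
  not the driver's actual fix - a generic wrapper might look like this
  (VimFaultException and relogin are placeholder names):

    class VimFaultException(Exception):
        """Stand-in for the fault raised in the traceback above."""

    def call_with_relogin(func, relogin, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except VimFaultException:
            relogin()  # re-establish the authenticated session
            return func(*args, **kwargs)  # retry the call once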

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239891] Re: tempest.api.object_storage.test_account_services.AccountTest fails under neutron-pg-isolated

2013-12-12 Thread Attila Fazekas
** Project changed: tempest => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239891

Title:
  tempest.api.object_storage.test_account_services.AccountTest fails
  under neutron-pg-isolated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/38/51738/1/check/check-tempest-devstack-vm-
  neutron-pg-isolated/73aad7a/console.html

  2013-10-15 00:19:04.556 | Error in atexit._run_exitfuncs:
  2013-10-15 00:19:04.556 | Traceback (most recent call last):
  2013-10-15 00:19:04.556 |   File "/usr/lib/python2.7/atexit.py", line 24, in 
_run_exitfuncs
  2013-10-15 00:19:04.557 | func(*targs, **kargs)
  2013-10-15 00:19:04.558 |   File "tempest/test.py", line 167, in 
validate_tearDownClass
  2013-10-15 00:19:04.558 | + str(at_exit_set))
  2013-10-15 00:19:04.558 | RuntimeError: tearDownClass does not calls the 
super's tearDownClass in these classes: set([])
  2013-10-15 00:19:04.559 | Error in sys.exitfunc:
  2013-10-15 00:19:04.663 | 
  2013-10-15 00:19:04.664 | process-returncode
  2013-10-15 00:19:04.664 | process-returncode ... FAIL
  2013-10-15 00:19:04.980 | 
  2013-10-15 00:19:04.981 | 
==
  2013-10-15 00:19:04.981 | FAIL: tearDownClass 
(tempest.api.object_storage.test_account_services.AccountTest)
  2013-10-15 00:19:04.981 | tearDownClass 
(tempest.api.object_storage.test_account_services.AccountTest)
  2013-10-15 00:19:04.982 | 
--
  2013-10-15 00:19:04.982 | _StringException: Traceback (most recent call last):
  2013-10-15 00:19:04.982 |   File 
"tempest/api/object_storage/test_account_services.py", line 41, in tearDownClass
  2013-10-15 00:19:04.983 | super(AccountTest, cls).tearDownClass()
  2013-10-15 00:19:04.983 |   File "tempest/api/object_storage/base.py", line 
77, in tearDownClass
  2013-10-15 00:19:04.983 | cls.isolated_creds.clear_isolated_creds()
  2013-10-15 00:19:04.984 |   File "tempest/common/isolated_creds.py", line 
453, in clear_isolated_creds
  2013-10-15 00:19:04.984 | self._clear_isolated_net_resources()
  2013-10-15 00:19:04.984 |   File "tempest/common/isolated_creds.py", line 
445, in _clear_isolated_net_resources
  2013-10-15 00:19:04.985 | self._clear_isolated_network(network['id'], 
network['name'])
  2013-10-15 00:19:04.985 |   File "tempest/common/isolated_creds.py", line 
399, in _clear_isolated_network
  2013-10-15 00:19:04.985 | net_client.delete_network(network_id)
  2013-10-15 00:19:04.985 |   File 
"tempest/services/network/json/network_client.py", line 76, in delete_network
  2013-10-15 00:19:04.986 | resp, body = self.delete(uri, self.headers)
  2013-10-15 00:19:04.986 |   File "tempest/common/rest_client.py", line 308, 
in delete
  2013-10-15 00:19:04.986 | return self.request('DELETE', url, headers)
  2013-10-15 00:19:04.987 |   File "tempest/common/rest_client.py", line 436, 
in request
  2013-10-15 00:19:04.987 | resp, resp_body)
  2013-10-15 00:19:04.987 |   File "tempest/common/rest_client.py", line 522, 
in _error_checker
  2013-10-15 00:19:04.988 | raise exceptions.ComputeFault(message)
  2013-10-15 00:19:04.988 | ComputeFault: Got compute fault
  2013-10-15 00:19:04.988 | Details: {"NeutronError": "Request Failed: internal 
server error while processing your request."}
  2013-10-15 00:19:04.988 | 
  2013-10-15 00:19:04.989 | 
  2013-10-15 00:19:04.989 | 
==
  2013-10-15 00:19:04.989 | FAIL: process-returncode
  2013-10-15 00:19:04.990 | process-returncode
  2013-10-15 00:19:04.990 | 
--
  2013-10-15 00:19:04.990 | _StringException: Binary content:
  2013-10-15 00:19:04.991 |   traceback (test/plain; charset="utf8")
  2013-10-15 00:19:04.991 | 
  2013-10-15 00:19:04.991 | 
  2013-10-15 00:19:04.991 | 
--

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1239891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1230407] Re: VMs can't progress through state changes because Neutron is deadlocking on its database queries, and thus leaving networks in inconsistent states

2013-12-12 Thread Sean Dague
** No longer affects: tempest

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1230407

Title:
  VMs can't progress through state changes because Neutron is
  deadlocking on its database queries, and thus leaving networks in
  inconsistent states

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  This is most often seen with the "State change timeout exceeded" in
  the tempest logs.

  2013-09-25 16:03:28.319 | FAIL: 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance_with_tags[gate,smoke]
  2013-09-25 16:03:28.319 | 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance_with_tags[gate,smoke]
  2013-09-25 16:03:28.319 | 
--
  2013-09-25 16:03:28.319 | _StringException: Empty attachments:
  2013-09-25 16:03:28.319 |   stderr
  2013-09-25 16:03:28.320 |   stdout
  2013-09-25 16:03:28.320 |
  2013-09-25 16:03:28.320 | pythonlogging:'': {{{2013-09-25 15:49:34,792 state: 
pending}}}
  2013-09-25 16:03:28.320 |
  2013-09-25 16:03:28.320 | Traceback (most recent call last):
  2013-09-25 16:03:28.320 |   File 
"tempest/thirdparty/boto/test_ec2_instance_run.py", line 175, in 
test_run_stop_terminate_instance_with_tags
  2013-09-25 16:03:28.320 | self.assertInstanceStateWait(instance, 
"running")
  2013-09-25 16:03:28.321 |   File "tempest/thirdparty/boto/test.py", line 356, 
in assertInstanceStateWait
  2013-09-25 16:03:28.321 | state = self.waitInstanceState(lfunction, 
wait_for)
  2013-09-25 16:03:28.321 |   File "tempest/thirdparty/boto/test.py", line 341, 
in waitInstanceState
  2013-09-25 16:03:28.321 | self.valid_instance_state)
  2013-09-25 16:03:28.321 |   File "tempest/thirdparty/boto/test.py", line 331, 
in state_wait_gone
  2013-09-25 16:03:28.321 | state = state_wait(lfunction, final_set, 
valid_set)
  2013-09-25 16:03:28.322 |   File "tempest/thirdparty/boto/utils/wait.py", 
line 57, in state_wait
  2013-09-25 16:03:28.322 | (dtime, final_set, status))
  2013-09-25 16:03:28.322 | AssertionError: State change timeout 
exceeded!(400s) While waitingfor set(['running', '_GONE']) at "pending"

  full log: http://logs.openstack.org/38/47438/1/gate/gate-tempest-
  devstack-vm-neutron/93db162/
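
  The usual mitigation for database deadlocks of this kind is to retry
  the transaction. A hypothetical sketch of that pattern - not neutron's
  actual code, and DBDeadlock stands in for the driver-specific exception:

    import functools
    import time

    class DBDeadlock(Exception):
        pass

    def retry_on_deadlock(retries=3, delay=0.5):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(retries):
                    try:
                        return func(*args, **kwargs)
                    except DBDeadlock:
                        if attempt == retries - 1:
                            raise
                        time.sleep(delay)  # back off before retrying
            return wrapper
        return decorator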

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1230407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1230354] Re: tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario fails sporadically

2013-12-12 Thread Sean Dague
This bug is basically useless: it doesn't get far enough to identify any
root cause. I'm marking it as invalid; the narrower bugs can be refiled.

** No longer affects: cinder

** No longer affects: nova

** Changed in: tempest
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1230354

Title:
  
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
  fails sporadically

Status in Tempest:
  Invalid

Bug description:
  See: http://logs.openstack.org/97/44097/18/check/gate-tempest-
  devstack-vm-postgres-full/270e611/console.html

  2013-09-25 15:15:37.447 | 
==
  2013-09-25 15:15:37.463 | FAIL: 
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario[compute,image,network,volume]
  2013-09-25 15:15:37.464 | 
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario[compute,image,network,volume]
  2013-09-25 15:15:37.464 | 
--
  2013-09-25 15:15:37.464 | _StringException: Empty attachments:
  2013-09-25 15:15:37.464 |   stderr
  2013-09-25 15:15:37.464 |   stdout
  2013-09-25 15:15:37.464 | 
  2013-09-25 15:15:37.465 | pythonlogging:'': {{{
  2013-09-25 15:15:37.465 | 2013-09-25 15:07:17,364 Starting new HTTP 
connection (1): 127.0.0.1
  2013-09-25 15:15:37.465 | 2013-09-25 15:07:17,628 Starting new HTTP 
connection (1): 127.0.0.1
  2013-09-25 15:15:37.465 | 2013-09-25 15:07:29,212 Starting new HTTP 
connection (1): 127.0.0.1
  .
  .
  .
  2013-09-25 15:15:37.619 | 2013-09-25 15:14:11,445 Starting new HTTP 
connection (1): 127.0.0.1
  2013-09-25 15:15:37.619 | 2013-09-25 15:14:12,529 Starting new HTTP 
connection (1): 127.0.0.1
  2013-09-25 15:15:37.620 | 2013-09-25 15:14:13,617 Starting new HTTP 
connection (1): 127.0.0.1
  2013-09-25 15:15:37.620 | }}}
  2013-09-25 15:15:37.620 | 
  2013-09-25 15:15:37.620 | Traceback (most recent call last):
  2013-09-25 15:15:37.621 |   File "tempest/scenario/test_minimum_basic.py", 
line 158, in test_minimum_basic_scenario
  2013-09-25 15:15:37.621 | self.nova_volume_attach()
  2013-09-25 15:15:37.621 |   File "tempest/scenario/test_minimum_basic.py", 
line 120, in nova_volume_attach
  2013-09-25 15:15:37.622 | self._wait_for_volume_status('in-use')
  2013-09-25 15:15:37.622 |   File "tempest/scenario/test_minimum_basic.py", 
line 48, in _wait_for_volume_status
  2013-09-25 15:15:37.622 | self.volume_client.volumes, volume_id, status)
  2013-09-25 15:15:37.622 |   File "tempest/scenario/manager.py", line 290, in 
status_timeout
  2013-09-25 15:15:37.622 | self._status_timeout(things, thing_id, 
expected_status=expected_status)
  2013-09-25 15:15:37.623 |   File "tempest/scenario/manager.py", line 341, in 
_status_timeout
  2013-09-25 15:15:37.623 | raise exceptions.TimeoutException(message)
  2013-09-25 15:15:37.623 | TimeoutException: Request timed out
  2013-09-25 15:15:37.623 | Details: Timed out waiting for thing 
b5781471-5ee9-44de-944b-28c10a793b31   to become in-use

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1230354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1232303] Re: FAIL: tempest test_large_ops_scenario - failed to get to expected status. In ERROR state.

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1232303

Title:
  FAIL: tempest test_large_ops_scenario - failed to get to expected
  status. In ERROR state.

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/13/48513/2/check/check-tempest-devstack-vm-
  large-ops/8ce3344/console.html

  2013-09-27 21:08:58.498 | 
==
  2013-09-27 21:08:58.498 | FAIL: 
tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario[compute,image]
  2013-09-27 21:08:58.498 | tags: worker-0
  2013-09-27 21:08:58.498 | 
--
  2013-09-27 21:08:58.498 | Empty attachments:
  2013-09-27 21:08:58.498 |   stderr
  2013-09-27 21:08:58.499 |   stdout
  2013-09-27 21:08:58.499 | 
  2013-09-27 21:08:58.499 | pythonlogging:'': {{{
  2013-09-27 21:08:58.499 | 2013-09-27 21:04:54,743 Starting new HTTP 
connection (1): 127.0.0.1
  2013-09-27 21:08:58.499 | 2013-09-27 21:04:54,864 Starting new HTTP 
connection (1): 127.0.0.1
  2013-09-27 21:08:58.499 | }}}
  2013-09-27 21:08:58.500 | 
  2013-09-27 21:08:58.500 | Traceback (most recent call last):
  2013-09-27 21:08:58.500 |   File "tempest/scenario/test_large_ops.py", line 
105, in test_large_ops_scenario
  2013-09-27 21:08:58.500 | self.nova_boot()
  2013-09-27 21:08:58.500 |   File "tempest/scenario/test_large_ops.py", line 
98, in nova_boot
  2013-09-27 21:08:58.500 | self._wait_for_server_status('ACTIVE')
  2013-09-27 21:08:58.501 |   File "tempest/scenario/test_large_ops.py", line 
42, in _wait_for_server_status
  2013-09-27 21:08:58.501 | self.compute_client.servers, server.id, status)
  2013-09-27 21:08:58.501 |   File "tempest/scenario/manager.py", line 290, in 
status_timeout
  2013-09-27 21:08:58.501 | self._status_timeout(things, thing_id, 
expected_status=expected_status)
  2013-09-27 21:08:58.501 |   File "tempest/scenario/manager.py", line 338, in 
_status_timeout
  2013-09-27 21:08:58.501 | self.config.compute.build_interval):
  2013-09-27 21:08:58.502 |   File "tempest/test.py", line 237, in 
call_until_true
  2013-09-27 21:08:58.502 | if func():
  2013-09-27 21:08:58.502 |   File "tempest/scenario/manager.py", line 329, in 
check_status
  2013-09-27 21:08:58.502 | raise exceptions.BuildErrorException(message)
  2013-09-27 21:08:58.502 | BuildErrorException: Server %(server_id)s failed to 
build and is in ERROR status
  2013-09-27 21:08:58.503 | Details:  failed to get 
to expected status.   In ERROR state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1232303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1232971] Re: tempest gating error: test_run_stop_terminate_instance_with_tags

2013-12-12 Thread Sean Dague
** Changed in: tempest
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1232971

Title:
  tempest gating error: test_run_stop_terminate_instance_with_tags

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in Tempest:
  Fix Released

Bug description:
  2013-09-29 21:56:27.035 | 
==
  2013-09-29 21:56:27.035 | FAIL: 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance_with_tags[gate,smoke]
  2013-09-29 21:56:27.035 | 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance_with_tags[gate,smoke]
  2013-09-29 21:56:27.035 | 
--
  2013-09-29 21:56:27.036 | _StringException: Empty attachments:
  2013-09-29 21:56:27.036 |   stderr
  2013-09-29 21:56:27.036 |   stdout
  2013-09-29 21:56:27.036 | 
  2013-09-29 21:56:27.036 | pythonlogging:'': {{{2013-09-29 21:42:26,545 state: 
pending}}}
  2013-09-29 21:56:27.036 | 
  2013-09-29 21:56:27.037 | Traceback (most recent call last):
  2013-09-29 21:56:27.037 |   File 
"tempest/thirdparty/boto/test_ec2_instance_run.py", line 175, in 
test_run_stop_terminate_instance_with_tags
  2013-09-29 21:56:27.037 | self.assertInstanceStateWait(instance, 
"running")
  2013-09-29 21:56:27.037 |   File "tempest/thirdparty/boto/test.py", line 356, 
in assertInstanceStateWait
  2013-09-29 21:56:27.037 | state = self.waitInstanceState(lfunction, 
wait_for)
  2013-09-29 21:56:27.037 |   File "tempest/thirdparty/boto/test.py", line 341, 
in waitInstanceState
  2013-09-29 21:56:27.037 | self.valid_instance_state)
  2013-09-29 21:56:27.038 |   File "tempest/thirdparty/boto/test.py", line 331, 
in state_wait_gone
  2013-09-29 21:56:27.038 | state = state_wait(lfunction, final_set, 
valid_set)
  2013-09-29 21:56:27.038 |   File "tempest/thirdparty/boto/utils/wait.py", 
line 57, in state_wait
  2013-09-29 21:56:27.038 | (dtime, final_set, status))
  2013-09-29 21:56:27.038 | AssertionError: State change timeout 
exceeded!(400s) While waitingfor set(['running', '_GONE']) at "pending"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1232971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239891] [NEW] tempest.api.object_storage.test_account_services.AccountTest fails under neutron-pg-isolated

2013-12-12 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

http://logs.openstack.org/38/51738/1/check/check-tempest-devstack-vm-
neutron-pg-isolated/73aad7a/console.html

2013-10-15 00:19:04.556 | Error in atexit._run_exitfuncs:
2013-10-15 00:19:04.556 | Traceback (most recent call last):
2013-10-15 00:19:04.556 |   File "/usr/lib/python2.7/atexit.py", line 24, in 
_run_exitfuncs
2013-10-15 00:19:04.557 | func(*targs, **kargs)
2013-10-15 00:19:04.558 |   File "tempest/test.py", line 167, in 
validate_tearDownClass
2013-10-15 00:19:04.558 | + str(at_exit_set))
2013-10-15 00:19:04.558 | RuntimeError: tearDownClass does not calls the 
super's tearDownClass in these classes: set([])
2013-10-15 00:19:04.559 | Error in sys.exitfunc:
2013-10-15 00:19:04.663 | 
2013-10-15 00:19:04.664 | process-returncode
2013-10-15 00:19:04.664 | process-returncode ... FAIL
2013-10-15 00:19:04.980 | 
2013-10-15 00:19:04.981 | 
==
2013-10-15 00:19:04.981 | FAIL: tearDownClass 
(tempest.api.object_storage.test_account_services.AccountTest)
2013-10-15 00:19:04.981 | tearDownClass 
(tempest.api.object_storage.test_account_services.AccountTest)
2013-10-15 00:19:04.982 | 
--
2013-10-15 00:19:04.982 | _StringException: Traceback (most recent call last):
2013-10-15 00:19:04.982 |   File 
"tempest/api/object_storage/test_account_services.py", line 41, in tearDownClass
2013-10-15 00:19:04.983 | super(AccountTest, cls).tearDownClass()
2013-10-15 00:19:04.983 |   File "tempest/api/object_storage/base.py", line 77, 
in tearDownClass
2013-10-15 00:19:04.983 | cls.isolated_creds.clear_isolated_creds()
2013-10-15 00:19:04.984 |   File "tempest/common/isolated_creds.py", line 453, 
in clear_isolated_creds
2013-10-15 00:19:04.984 | self._clear_isolated_net_resources()
2013-10-15 00:19:04.984 |   File "tempest/common/isolated_creds.py", line 445, 
in _clear_isolated_net_resources
2013-10-15 00:19:04.985 | self._clear_isolated_network(network['id'], 
network['name'])
2013-10-15 00:19:04.985 |   File "tempest/common/isolated_creds.py", line 399, 
in _clear_isolated_network
2013-10-15 00:19:04.985 | net_client.delete_network(network_id)
2013-10-15 00:19:04.985 |   File 
"tempest/services/network/json/network_client.py", line 76, in delete_network
2013-10-15 00:19:04.986 | resp, body = self.delete(uri, self.headers)
2013-10-15 00:19:04.986 |   File "tempest/common/rest_client.py", line 308, in 
delete
2013-10-15 00:19:04.986 | return self.request('DELETE', url, headers)
2013-10-15 00:19:04.987 |   File "tempest/common/rest_client.py", line 436, in 
request
2013-10-15 00:19:04.987 | resp, resp_body)
2013-10-15 00:19:04.987 |   File "tempest/common/rest_client.py", line 522, in 
_error_checker
2013-10-15 00:19:04.988 | raise exceptions.ComputeFault(message)
2013-10-15 00:19:04.988 | ComputeFault: Got compute fault
2013-10-15 00:19:04.988 | Details: {"NeutronError": "Request Failed: internal 
server error while processing your request."}
2013-10-15 00:19:04.988 | 
2013-10-15 00:19:04.989 | 
2013-10-15 00:19:04.989 | 
==
2013-10-15 00:19:04.989 | FAIL: process-returncode
2013-10-15 00:19:04.990 | process-returncode
2013-10-15 00:19:04.990 | 
--
2013-10-15 00:19:04.990 | _StringException: Binary content:
2013-10-15 00:19:04.991 |   traceback (test/plain; charset="utf8")
2013-10-15 00:19:04.991 | 
2013-10-15 00:19:04.991 | 
2013-10-15 00:19:04.991 | 
--

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
tempest.api.object_storage.test_account_services.AccountTest fails under 
neutron-pg-isolated
https://bugs.launchpad.net/bugs/1239891
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1216903] Re: logical_resource_id disappeared in favor of resource_name

2013-12-12 Thread Sean Dague
** Changed in: tempest
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1216903

Title:
  logical_resource_id disappeared in favor of resource_name

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Python client library for heat:
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  impact from Heat side :

  https://review.openstack.org/#/c/43391/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1216903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226943] Re: Need to use built-in print() function instead of print statement

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1226943

Title:
  Need to use built-in print() function instead of print statement

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In Python 3 the print statement is not supported, so we should use only
  the print() function. The built-in function was introduced in Python 2.6:
  http://www.python.org/dev/peps/pep-3105/

  Note: this function is not normally available as a built-in, since the
  name print is recognized as the print statement. To disable the statement
  and use the print() function, put this future statement at the top of
  your module:
  from __future__ import print_function

  http://docs.python.org/2/library/functions.html#print
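
  For example, a minimal module that behaves the same on Python 2.6+ and
  Python 3:

    # With the future import, print is the built-in function, so keyword
    # arguments like sep= and end= become available.
    from __future__ import print_function

    print('works on python', 2, 'and', 3, sep=' ')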

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1226943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1006725] Re: Incorrect error returned during Create Image and multi byte characters used for Image name

2013-12-12 Thread Sean Dague
Definitely not fixed on the nova side. Attempts to unskip the bug
generated issues.

** Changed in: nova
   Status: Invalid => Confirmed

** Changed in: nova
   Importance: Low => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1006725

Title:
  Incorrect error returned during Create Image and multi byte characters
  used for Image name

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  In Progress

Bug description:
  Our tempest test that checks for a 400 Bad Request return code fails
  with a ComputeFault instead.

  Pass multi-byte character image name during Create Image
  Actual Response Code: ComputeFault, 500 
  Expected Response Code: 400 Bad Request

  
  Return an error if the server name has a multi-byte character ... FAIL

  ==
  FAIL: Return an error if the server name has a multi-byte character
  --
  Traceback (most recent call last):
File "/opt/stack/tempest/tests/test_images.py", line 251, in 
test_create_image_specify_multibyte_character_server_name
  self.fail("Should return 400 Bad Request if multi byte characters"
  AssertionError: Should return 400 Bad Request if multi byte characters are 
used for image name
   >> begin captured logging << 
  tempest.config: INFO: Using tempest config file 
/opt/stack/tempest/etc/tempest.conf
  tempest.common.rest_client: ERROR: Request URL: 
http://10.2.3.164:8774/v2/1aeac1cfbfdd43c2845b2cb3a4f15790/images/24ceff93-1af3-41ab-802f-9fc4d8b90b69
  tempest.common.rest_client: ERROR: Request Body: None
  tempest.common.rest_client: ERROR: Response Headers: {'date': 'Thu, 31 May 
2012 06:02:33 GMT', 'status': '404', 'content-length': '62', 'content-type': 
'application/json; charset=UTF-8', 'x-compute-request-id': 
'req-7a15d284-e934-47a1-87f4-7746e949c7a2'}
  tempest.common.rest_client: ERROR: Response Body: {"itemNotFound": 
{"message": "Image not found.", "code": 404}}
  tempest.common.rest_client: ERROR: Request URL: 
http://10.2.3.164:8774/v2/1aeac1cfbfdd43c2845b2cb3a4f15790/servers/ecb51dfb-493d-4ef8-9178-1adc3d96a04d/action
  tempest.common.rest_client: ERROR: Request Body: {"createImage": {"name": 
"\ufeff43802479847"}}
  tempest.common.rest_client: ERROR: Response Headers: {'date': 'Thu, 31 May 
2012 06:02:44 GMT', 'status': '500', 'content-length': '128', 'content-type': 
'application/json; charset=UTF-8', 'x-compute-request-id': 
'req-1a9505f5-4dfc-44e7-b04a-f8daec0f956e'}
  tempest.common.rest_client: ERROR: Response Body: {u'computeFault': 
{u'message': u'The server has either erred or is incapable of performing the 
requested operation.', u'code': 500}}
  - >> end captured logging << -
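
  The expected behaviour is input validation at the API layer: reject the
  bad name with a 400 instead of letting it surface as a 500. A minimal,
  hypothetical sketch of such a check (not nova's actual validator); note
  the request body above shows the name starts with a BOM (\ufeff):

    import unicodedata

    def validate_image_name(name):
        # Reject empty names and names containing control or format
        # characters such as the byte-order mark in the failing request.
        if not name:
            return 400, 'Bad Request: empty image name'
        for ch in name:
            if unicodedata.category(ch) in ('Cc', 'Cf'):
                return 400, 'Bad Request: invalid character in image name'
        return 200, 'OK'

    print(validate_image_name(u'\ufeff43802479847'))  # -> (400, ...)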

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1006725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1152623] Re: RFC2616 section 9.7 status code vs. nova server delete

2013-12-12 Thread Sean Dague
Until this is changed in nova, it's not actually appropriate to have a
tempest issue.

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1152623

Title:
  RFC2616 section 9.7 status code vs. nova server delete

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  In REST client implementations it is common good practice that, when:
  - a request causes both a synchronous and an asynchronous effect, and
  - the synchronous operation has an immediately visible effect, i.e. an
  immediate subsequent request shows the change,
  we should emphasize the synchronous behavior in the responses (status
  code), or respond in a way which does not distinguish the two cases.

  However, if the HTTP method is DELETE, the rule is the opposite!
  If the resource at the request URL has not been deleted, the service
  MUST NOT respond with 204.

  "
 A successful response SHOULD be 200 (OK) if the response includes an
 entity describing the status, 202 (Accepted) if the action has not
 yet been enacted, or 204 (No Content) if the action has been enacted
 but the response does not include an entity.
  " by RFC2616 section 9.7

  This means that if a DELETE request is answered with a 204 status code,
  an immediate subsequent request MUST get a 404, unless a concurrent
  operation recreated the resource.
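
  In code terms the rule is roughly the following (hypothetical handler,
  not nova's API code):

    def delete_status(resource_gone):
        # Per RFC 2616 section 9.7: 204 only when the action has been
        # enacted, 202 when the delete is accepted but still pending.
        if resource_gone:
            return 204
        return 202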

  
  $ nova --debug delete ab0ebda6-2c21-4258-8934-1005b970fee5 ; nova --debug 
show ab0ebda6-2c21-4258-8934-1005b970fee5

  Part of the output in the received order:
  -
  REQ: curl -i 
http://10.34.69.149:8774/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5
 -X DELETE -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H 
"Accept: application/json" -H "X-Auth-Token: c35f5783528d4131bf100604b2fabd6c"

  send: u'DELETE 
/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5
 HTTP/1.1\r\nHost: 10.34.69.149:8774\r\nx-auth-project-id: 
admin\r\nx-auth-token: c35f5783528d4131bf100604b2fabd6c\r\naccept-encoding: 
gzip, deflate\r\naccept: application/json\r\nuser-agent: 
python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 204 No Content\r\n'
  header: Content-Length: 0
  header: X-Compute-Request-Id: req-53e3503a-8d73-4ffc-ba43-4bd5659a9e22
  header: Content-Type: application/json
  header: Date: Sat, 02 Mar 2013 18:26:21 GMT
  RESP:{'date': 'Sat, 02 Mar 2013 18:26:21 GMT', 'status': '204', 
'content-length': '0', 'content-type': 'application/json', 
'x-compute-request-id': 'req-53e3503a-8d73-4ffc-ba43-4bd5659a9e22'} 
  -
  REQ: curl -i 
http://10.34.69.149:8774/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5
 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H 
"Accept: application/json" -H "X-Auth-Token: f74d6c7226c14915a26a81b540d43f3b"

  connect: (10.34.69.149, 8774)
  send: u'GET 
/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5
 HTTP/1.1\r\nHost: 10.34.69.149:8774\r\nx-auth-project-id: 
admin\r\nx-auth-token: f74d6c7226c14915a26a81b540d43f3b\r\naccept-encoding: 
gzip, deflate\r\naccept: application/json\r\nuser-agent: 
python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: req-80c97c68-0b44-4650-b027-84a85ee04b86
  header: Content-Type: application/json
  header: Content-Length: 1502
  header: Date: Sat, 02 Mar 2013 18:26:21 GMT
  RESP:{'status': '200', 'content-length': '1502', 'content-location': 
u'http://10.34.69.149:8774/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5',
 'x-compute-request-id': 'req-80c97c68-0b44-4650-b027-84a85ee04b86', 'date': 
'Sat, 02 Mar 2013 18:26:21 GMT', 'content-type': 'application/json'} {"server": 
{"status": "ACTIVE", "updated": "2013-03-02T18:26:21Z", "hostId": 
"31bdffcdffd5b869b87c9be3cdd700e29c4a08286d6d306622b4815a", 
"OS-EXT-SRV-ATTR:host": "new32.lithium.rhev.lab.eng.brq.redhat.com", 
"addresses": {"novanetwork": [{"version": 4, "addr": "192.168.32.2"}]}, 
"links": [{"href": 
"http://10.34.69.149:8774/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5";,
 "rel": "self"}, {"href": 
"http://10.34.69.149:8774/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5";,
 "rel": "bookmark"}], "key_name": null, "image": {"id": 
"12e9c131-aaf4-4f73-9659-ed2da9759cd2", "links": [{"href": "http://10.34.69.149:
 
8774/89a38fe6d3194864995ab0872905a65e/images/12e9c131-aaf4-4f73-9659-ed2da9759cd2",
 "rel": "bookmark"}]}, "OS-EXT-STS:task_state": "deleting", 
"OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": 
"instance-0003", "OS-EXT-SRV-ATTR:hypervisor_hostname": 
"new32.lithium.rhev.lab.eng.brq.redhat.com", "flavor": {"id": "1", "links": 
[{"href": 
"http://10.34.69

[Yahoo-eng-team] [Bug 1259542] Re: Send OS distribution on API headers

2013-12-12 Thread Dolph Mathews
Moved to oslo as there's nothing keystone-specific about this.

** Changed in: keystone
   Importance: Undecided => Wishlist

** Project changed: keystone => oslo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1259542

Title:
  Send OS distribution on API headers

Status in Oslo - a Library of Common OpenStack Code:
  In Progress

Bug description:
  It would be interesting to send the OS distribution in API headers, to
  be able to detect the OS that is serving API calls and collect that in
  stats.
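
  As an illustrative sketch - the header name and middleware shape are
  assumptions, not an agreed design - a WSGI filter could attach the
  distribution like this:

    import platform

    class DistroHeaderMiddleware(object):
        """Add an OS-distribution header to every API response."""

        def __init__(self, app):
            self.app = app
            # e.g. 'Ubuntu-12.04' on the platforms of this era.
            self.distro = '-'.join(platform.linux_distribution()[:2])

        def __call__(self, environ, start_response):
            def repl_start_response(status, headers, exc_info=None):
                # X-OpenStack-Distro is a made-up header name.
                headers.append(('X-OpenStack-Distro', self.distro))
                return start_response(status, headers, exc_info)
            return self.app(environ, repl_start_response)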

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo/+bug/1259542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1224518] Re: test_reboot_server_hard fails sporadically in swift check jobs

2013-12-12 Thread Attila Fazekas
10 results in logstash with "Current status: HARD_REBOOT".

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1224518

Title:
  test_reboot_server_hard fails sporadically in swift check jobs

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  See: http://logs.openstack.org/46/46146/2/check/gate-tempest-devstack-
  vm-postgres-full/b2712f1/console.html

  2013-09-12 04:43:17.625 | 
==
  2013-09-12 04:43:17.649 | FAIL: 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard[gate,smoke]
  2013-09-12 04:43:17.651 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard[gate,smoke]
  2013-09-12 04:43:17.652 | 
--
  2013-09-12 04:43:17.652 | _StringException: Empty attachments:
  2013-09-12 04:43:17.652 |   stderr
  2013-09-12 04:43:17.652 |   stdout
  2013-09-12 04:43:17.653 | 
  2013-09-12 04:43:17.653 | pythonlogging:'': {{{
  2013-09-12 04:43:17.653 | 2013-09-12 04:16:55,739 Request: GET 
http://127.0.0.1:8774/v2/83ed6f49279b4292a00b32397d2f52fb/servers/8ad0ad9a-3975-486f-94b4-af1c89b51aaf
  2013-09-12 04:43:17.654 | 2013-09-12 04:16:55,806 Response Status: 200
  2013-09-12 04:43:17.654 | 2013-09-12 04:16:55,806 Nova request id: 
req-cdc6b1fc-bcf2-4e9c-bea1-8bf935993cbd
  2013-09-12 04:43:17.654 | 2013-09-12 04:16:55,807 Request: POST 
http://127.0.0.1:8774/v2/83ed6f49279b4292a00b32397d2f52fb/servers/8ad0ad9a-3975-486f-94b4-af1c89b51aaf/action
  2013-09-12 04:43:17.655 | 2013-09-12 04:16:55,917 Response Status: 202
  2013-09-12 04:43:17.655 | 2013-09-12 04:16:55,917 Nova request id: 
req-3af37dd3-0ddc-4daa-aa6f-6958a5073cc4
  2013-09-12 04:43:17.655 | 2013-09-12 04:16:55,918 Request: GET 
http://127.0.0.1:8774/v2/83ed6f49279b4292a00b32397d2f52fb/servers/8ad0ad9a-3975-486f-94b4-af1c89b51aaf
  2013-09-12 04:43:17.655 | 2013-09-12 04:16:55,986 Response Status: 200
  2013-09-12 04:43:17.656 | 2013-09-12 04:16:55,986 Nova request id: 
req-a7298d3e-167c-4c8f-9506-6064ba811e5b

  .
  .
  .

  2013-09-12 04:43:17.976 | 2013-09-12 04:23:35,773 Request: GET 
http://127.0.0.1:8774/v2/83ed6f49279b4292a00b32397d2f52fb/servers/8ad0ad9a-3975-486f-94b4-af1c89b51aaf
  2013-09-12 04:43:17.976 | 2013-09-12 04:23:35,822 Response Status: 200
  2013-09-12 04:43:17.976 | 2013-09-12 04:23:35,823 Nova request id: 
req-a122aded-b49b-4847-9920-b2b8b09bc0ca
  2013-09-12 04:43:17.976 | }}}
  2013-09-12 04:43:17.977 | 
  2013-09-12 04:43:17.977 | Traceback (most recent call last):
  2013-09-12 04:43:17.978 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 81, in 
test_reboot_server_hard
  2013-09-12 04:43:17.978 | 
self.client.wait_for_server_status(self.server_id, 'ACTIVE')
  2013-09-12 04:43:17.979 |   File 
"tempest/services/compute/json/servers_client.py", line 176, in 
wait_for_server_status
  2013-09-12 04:43:17.979 | raise exceptions.TimeoutException(message)
  2013-09-12 04:43:17.979 | TimeoutException: Request timed out
  2013-09-12 04:43:17.980 | Details: Server 
8ad0ad9a-3975-486f-94b4-af1c89b51aaf failed to reach ACTIVE status within the 
required time (400 s). Current status: HARD_REBOOT.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1224518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1153926] Re: flavor show shouldn't read deleted flavors.

2013-12-12 Thread Sean Dague
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1153926

Title:
  flavor show shouldn't read deleted flavors.

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Nova:
  In Progress

Bug description:
  An instance type is created by:

  return db.instance_type_create(context.get_admin_context(), kwargs)

  which uses the read_deleted="no" from the admin context.

  This means, as seen in nova/tests/test_instance_types.py:

  def test_read_deleted_false_converting_flavorid(self):
  """
  Ensure deleted instance types are not returned when not needed (for
  example when creating a server and attempting to translate from
  flavorid to instance_type_id.
  """
  instance_types.create("instance_type1", 256, 1, 120, 100, "test1")
  instance_types.destroy("instance_type1")
  instance_types.create("instance_type1_redo", 256, 1, 120, 100, "test1")

  instance_type = instance_types.get_instance_type_by_flavor_id(
  "test1", read_deleted="no")
  self.assertEqual("instance_type1_redo", instance_type["name"])

  So flavors with colliding ids can exist in the database.

  From the test we see this looks intended; however, it leads to
  undesirable results if we consider the following scenario.

  For 'show' in the flavors api, it uses read_deleted="yes". The reason
  for this is if a vm was created in the past with a now-deleted flavor,
  'nova show' can still show the flavor name that was specified for that
  vm creation. The flavor name is retrieved using the flavor id stored
  with the instance.

  Well, if there are colliding flavor ids in the database, the first of
  the duplicates will be picked, and it may not be the correct flavor
  for the vm.

  This leads me to believe that maybe at flavor create time, colliding
  ids should not be allowed, i.e. use

  return db.instance_type_create(context.get_admin_context(read_deleted="yes"),
 kwargs)

  to prevent the possibility of colliding flavor ids.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1153926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258601] Re: nova.network.manager: Unable to release because vif doesn't exist.

2013-12-12 Thread David Kranz
*** This bug is a duplicate of bug 1258848 ***
https://bugs.launchpad.net/bugs/1258848

Please include a pointer to the log file in such reports. According to
logstash this has hit 48 times in the last two weeks, which is a very low
failure rate. Ideally flaky bugs like this would be fixed. If the nova
team wants to silence this, a patch can be submitted to the whitelist in
tempest.

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258601

Title:
  nova.network.manager: Unable to release  because vif doesn't
  exist.

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  This error shows up in the nova-network log.

  Not sure if it needs to be whitelisted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260359] [NEW] Adding a member to an LBaaS pool when there are no servers available shows a "Success: Added member(s)." message.

2013-12-12 Thread alejandro emanuel paredes
Public bug reported:

Steps to reproduce:
1) Make sure there are no running instances
2) Go to "Load Balancers" tab / "Add Member"
3) In the "Add Member" window click on "Add"

Issue: No members are added and a "Success: Added member(s)." message is
shown.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260359

Title:
  Adding a member to an LBaaS pool when there are no servers available
  shows a "Success: Added member(s)." message.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1) Make sure there are no running instances
  2) Go to "Load Balancers" tab / "Add Member"
  3) In the "Add Member" window click on "Add"

  Issue: No members are added and a "Success: Added member(s)." message
  is shown.
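
  A plausible fix direction - sketch only, with hypothetical names rather
  than horizon's actual workflow code - is to fail the handle step when
  nothing was actually added:

    def add_member(request, server_id):
        """Placeholder for the real LBaaS member-create API call."""

    def handle(request, data):
        members = data.get('members') or []
        if not members:
            # Returning False from a workflow handle() makes horizon
            # report failure instead of the blanket success message.
            return False
        for server_id in members:
            add_member(request, server_id)
        return True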

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258682] Re: timeout causing gate-tempest-dsvm-full to fail

2013-12-12 Thread Sean Dague
If the fix is increasing the timeout in the gate, it's not a tempest
bug. It looks like libvirt went off the rails in this case, so nova is
probably a good bug choice.

** Also affects: openstack-ci
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258682

Title:
  timeout causing gate-tempest-dsvm-full to fail

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Core Infrastructure:
  New
Status in Tempest:
  Invalid

Bug description:
  This has happened several times. A recent example is in
  https://jenkins02.openstack.org/job/gate-tempest-dsvm-full/775/console

  There are several mentions of FAIL in the logs, but since the job
  timed out, no console logs were saved.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260249] Re: migration-list: 'unicode' object has no attribute 'iteritems'

2013-12-12 Thread Joe Gordon
** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260249

Title:
  migration-list: 'unicode' object has no attribute 'iteritems'

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  In Progress

Bug description:
  There is an AttributeError when we try to use the command "nova
  migration-list"

  Traceback (most recent call last):
File "/opt/stack/python-novaclient/novaclient/shell.py", line 721, in main
  OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
File "/opt/stack/python-novaclient/novaclient/shell.py", line 657, in main
  args.func(self.cs, args)
File "/opt/stack/python-novaclient/novaclient/v1_1/contrib/migrations.py", 
line 71, in do_migration_list
  args.cell_name))
File "/opt/stack/python-novaclient/novaclient/v1_1/contrib/migrations.py", 
line 53, in list
  return self._list("/os-migrations%s" % query_string, "migrations")
File "/opt/stack/python-novaclient/novaclient/base.py", line 80, in _list
  for res in data if res]
File "/opt/stack/python-novaclient/novaclient/base.py", line 426, in 
__init__
  self._add_details(info)
File "/opt/stack/python-novaclient/novaclient/base.py", line 449, in 
_add_details
  for (k, v) in six.iteritems(info):
File "/usr/local/lib/python2.7/dist-packages/six.py", line 439, in iteritems
  return iter(getattr(d, _iteritems)(**kw))
  AttributeError: 'unicode' object has no attribute 'iteritems'
  ERROR: 'unicode' object has no attribute 'iteritems'
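
  The traceback shows _add_details() receiving a unicode string where a
  dict was expected. A defensive sketch - not the actual novaclient patch -
  would skip non-mapping entries:

    import six

    def _add_details(obj, info):
        # Guard against payload entries that are bare strings rather
        # than mappings, as in the failure above.
        if not isinstance(info, dict):
            return
        for k, v in six.iteritems(info):
            setattr(obj, k, v)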

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259553] Re: ListServersNegativeTestXML flakey ERROR generation

2013-12-12 Thread Sean Dague
This isn't a tempest bug, because there wouldn't be a code fix in
tempest for it.

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259553

Title:
  ListServersNegativeTestXML flakey ERROR generation

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  Looks like ListServersNegativeTestXML can trigger a race in Nova which
  causes an ERROR to be logged:

  
  2013-12-10 12:41:34.628 ERROR nova.network.manager 
[req-7430295d-9c23-4a46-a755-f1e93aa53c6f 
ListServersNegativeTestXML-tempest-1313593245-user 
ListServersNegativeTestXML-tempest-1313593245-tenant] Unable to release 
10.1.0.10 because vif doesn't exist.

  http://logs.openstack.org/12/61012/2/check/check-tempest-dsvm-
  full/c92bea6/logs/screen-n-net.txt.gz?level=ERROR

  Probably related to https://code.launchpad.net/bugs/968457 which is meant to 
be fixed, but the comment in the nova
  code suggests it might not be.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1259553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218582] Re: ServerActionsTestJSON.test_pause_unpause_server fails with a timeout, other failures as a side effect

2013-12-12 Thread Sean Dague
This is actually also a nova bug. Nova didn't move this into a paused
state correctly.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Importance: Undecided => Medium

** Changed in: tempest
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218582

Title:
  ServerActionsTestJSON.test_pause_unpause_server fails with a timeout,
  other failures as a side effect

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Confirmed

Bug description:
  In this case, it looks like a test for pausing an instance got stuck
  (reported as running the longest at 250 seconds).  A number of other
  tests failed as a side effect it seems.  They all report that they
  can't do what they needed to do because the instance is still in a
  pausing task state.

  
http://logs.openstack.org/69/42769/7/gate/gate-tempest-devstack-vm-full/b2879b7
  
http://logs.openstack.org/69/42769/7/gate/gate-tempest-devstack-vm-full/b2879b7/console.html
  https://review.openstack.org/#/c/42769/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218582/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1221899] Re: test_resize_server_from_auto_to_manual: server failed to reach VERIFY_RESIZE status within the required time

2013-12-12 Thread Sean Dague
** Changed in: tempest
   Status: New => Incomplete

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1221899

Title:
  test_resize_server_from_auto_to_manual: server failed to reach
  VERIFY_RESIZE status within the required time

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  2013-09-06 19:24:49.608 | Traceback (most recent call last):
  2013-09-06 19:24:49.608 |   File 
"tempest/api/compute/servers/test_disk_config.py", line 114, in 
test_resize_server_from_auto_to_manual
  2013-09-06 19:24:49.609 | 
self.client.wait_for_server_status(server['id'], 'VERIFY_RESIZE')
  2013-09-06 19:24:49.609 |   File 
"tempest/services/compute/xml/servers_client.py", line 331, in 
wait_for_server_status
  2013-09-06 19:24:49.609 | raise exceptions.TimeoutException(message)
  2013-09-06 19:24:49.609 | TimeoutException: Request timed out
  2013-09-06 19:24:49.609 | Details: Server 
dabbdc8d-3194-4e88-bc9c-c897a1fe5f78 failed to reach VERIFY_RESIZE status 
within the required time (400 s). Current status: RESIZE.

  
  
http://logs.openstack.org/48/45248/2/check/gate-tempest-devstack-vm-full/66d555c/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1221899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257070] Re: test_glance_timeout flakey fail

2013-12-12 Thread Attila Fazekas
Glance is configured with one worker; is it possible it is too busy
because of other operations?

Do we need to increase the timeout and/or the number of workers?
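
For reference, the worker count is a glance-api.conf knob; a sketch of
the change being floated here (the value shown is only an example):

    [DEFAULT]
    # More than one API worker process, so a slow or busy request does
    # not starve a concurrent image-list until its client-side timeout.
    workers = 4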

** Changed in: tempest
   Status: New => Incomplete

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1257070

Title:
  test_glance_timeout flakey fail

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Tempest:
  Incomplete

Bug description:
  Transient fail for
  tempest.cli.simple_read_only.test_glance.SimpleReadOnlyGlanceClientTest
  test_glance_timeout

  http://logs.openstack.org/66/55766/3/gate/gate-tempest-devstack-vm-postgres-full/a807434/testr_results.html.gz

  ft254.6: 
tempest.cli.simple_read_only.test_glance.SimpleReadOnlyGlanceClientTest.test_glance_timeout_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2013-11-29 07:57:20,345 running: '/usr/local/bin/glance --os-username admin 
--os-tenant-name admin --os-password secret --os-auth-url 
http://127.0.0.1:5000/v2.0/  --timeout 15 image-list '
  2013-11-29 07:57:37,633 output of /usr/local/bin/glance --os-username admin 
--os-tenant-name admin --os-password secret --os-auth-url 
http://127.0.0.1:5000/v2.0/  --timeout 15 image-list :

  2013-11-29 07:57:37,635 error output of /usr/local/bin/glance --os-username 
admin --os-tenant-name admin --os-password secret --os-auth-url 
http://127.0.0.1:5000/v2.0/  --timeout 15 image-list :
  Error communicating with http://127.0.0.1:9292 timed out
  }}}

  Traceback (most recent call last):
File "tempest/cli/simple_read_only/test_glance.py", line 89, in 
test_glance_timeout
  self.glance('image-list', flags='--timeout %d' % CONF.cli.timeout)
File "tempest/cli/__init__.py", line 81, in glance
  'glance', action, flags, params, admin, fail_ok)
File "tempest/cli/__init__.py", line 110, in cmd_with_auth
  return self.cmd(cmd, action, flags, params, fail_ok)
File "tempest/cli/__init__.py", line 132, in cmd
  stderr=result_err)
  CommandFailed: Command '['/usr/local/bin/glance', '--os-username', 'admin', 
'--os-tenant-name', 'admin', '--os-password', 'secret', '--os-auth-url', 
'http://127.0.0.1:5000/v2.0/', '--timeout', '15', 'image-list']' returned 
non-zero exit status 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1257070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213215] Re: ServerRescueTest tearDownClass fails with volume status being in-use

2013-12-12 Thread Sean Dague
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
   Importance: Undecided => High

** Changed in: nova
   Importance: Undecided => High

** Changed in: cinder
   Status: New => Confirmed

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213215

Title:
  ServerRescueTest tearDownClass fails with volume status being in-use

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Confirmed

Bug description:
  Occasionally, running tempest in parallel fails several tests with
  timeout errors. The only non-timeout failure message is that the
  ServerRescueTest failed to delete a volume because it was still
  marked as in use. My guess is that the leftover volume is somehow
  interfering with the other tests and causing them to time out, but I
  haven't looked at the logs in detail, so it's just a wild guess.
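
  If that guess is right, the usual fix shape is to wait out the detach
  before deleting, e.g. (a sketch using this era's tempest client
  method names; treat the exact calls as illustrative):

    self.servers_client.detach_volume(server_id, volume_id)
    # Deleting while cinder still reports 'in-use' is what the
    # tearDownClass failure shows, so block until the detach lands.
    self.volumes_client.wait_for_volume_status(volume_id, 'available')
    self.volumes_client.delete_volume(volume_id)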

  
  2013-08-16 14:11:42.074 | 
==
  2013-08-16 14:11:42.075 | FAIL: 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_auto_disk_config[gate]
  2013-08-16 14:11:42.075 | 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_auto_disk_config[gate]
  2013-08-16 14:11:42.075 | 
--
  2013-08-16 14:11:42.075 | _StringException: Empty attachments:
  2013-08-16 14:11:42.075 |   stderr
  2013-08-16 14:11:42.076 |   stdout
  2013-08-16 14:11:42.076 | 
  2013-08-16 14:11:42.076 | Traceback (most recent call last):
  2013-08-16 14:11:42.076 |   File 
"tempest/api/compute/servers/test_disk_config.py", line 64, in 
test_rebuild_server_with_auto_disk_config
  2013-08-16 14:11:42.076 | wait_until='ACTIVE')
  2013-08-16 14:11:42.076 |   File "tempest/api/compute/base.py", line 140, in 
create_server
  2013-08-16 14:11:42.076 | server['id'], kwargs['wait_until'])
  2013-08-16 14:11:42.077 |   File 
"tempest/services/compute/json/servers_client.py", line 160, in 
wait_for_server_status
  2013-08-16 14:11:42.077 | time.sleep(self.build_interval)
  2013-08-16 14:11:42.077 |   File 
"/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 
52, in signal_handler
  2013-08-16 14:11:42.077 | raise TimeoutException()
  2013-08-16 14:11:42.077 | TimeoutException
  2013-08-16 14:11:42.077 | 
  2013-08-16 14:11:42.077 | 
  2013-08-16 14:11:42.078 | 
==
  2013-08-16 14:11:42.078 | FAIL: setUpClass 
(tempest.api.compute.images.test_image_metadata.ImagesMetadataTestXML)
  2013-08-16 14:11:42.078 | setUpClass 
(tempest.api.compute.images.test_image_metadata.ImagesMetadataTestXML)
  2013-08-16 14:11:42.078 | 
--
  2013-08-16 14:11:42.078 | _StringException: Traceback (most recent call last):
  2013-08-16 14:11:42.078 |   File 
"tempest/api/compute/images/test_image_metadata.py", line 46, in setUpClass
  2013-08-16 14:11:42.078 | cls.client.wait_for_image_status(cls.image_id, 
'ACTIVE')
  2013-08-16 14:11:42.079 |   File 
"tempest/services/compute/xml/images_client.py", line 167, in 
wait_for_image_status
  2013-08-16 14:11:42.079 | raise exceptions.TimeoutException
  2013-08-16 14:11:42.079 | TimeoutException: Request timed out
  2013-08-16 14:11:42.079 | 
  2013-08-16 14:11:42.079 | 
  2013-08-16 14:11:42.079 | 
==
  2013-08-16 14:11:42.079 | FAIL: 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
  2013-08-16 14:11:42.080 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
  2013-08-16 14:11:42.080 | 
--
  2013-08-16 14:11:42.080 | _StringException: Empty attachments:
  2013-08-16 14:11:42.080 |   stderr
  2013-08-16 14:11:42.080 |   stdout
  2013-08-16 14:11:42.080 | 
  2013-08-16 14:11:42.081 | Traceback (most recent call last):
  2013-08-16 14:11:42.081 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 184, in 
test_rescued_vm_detach_volume
  2013-08-16 14:11:42.081 | 
self.servers_client.wait_for_server_status(self.server_id, 'RESCUE')
  2013-08-16 14:11:42.081 |   File 
"tempest/services/compute/json/servers_client.py", line 160, in 
wait_for_server_status
  2013-08-16 14:11:42.081 | time.sleep(self.build_interval)
  2013-08-16 14:11:42.081 |   File 
"/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 
52, in signal_handler
  2013-08-16 14:11:42.081 | raise Timeo

[Yahoo-eng-team] [Bug 1251920] Re: Tempest failures due to failure to return console logs from an instance

2013-12-12 Thread Adalberto Medeiros
Since this is not hitting on tempest anymore, moving to Fix Released

** Changed in: tempest
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251920

Title:
  Tempest failures due to failure to return console logs from an
  instance

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Tempest:
  Fix Released

Bug description:
  Logstash search:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpjb25zb2xlLmh0bWwgQU5EIG1lc3NhZ2U6XCJhc3NlcnRpb25lcnJvcjogY29uc29sZSBvdXRwdXQgd2FzIGVtcHR5XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODQ2NDEwNzIxODl9

  An example failure is http://logs.openstack.org/92/55492/8/check/check-tempest-devstack-vm-full/ef3a4a4/console.html

  console.html
  ===

  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,775 Request: POST 
http://127.0.0.1:8774/v2/3f6934d9aabf467aa8bc51397ccfa782/servers/10aace14-23c1-4cec-9bfd-2c873df1fbee/action
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Body: 
{"os-getConsoleOutput": {"length": 10}}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:21,000 Response Status: 200
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Nova request id: 
req-7a2ee0ab-c977-4957-abb5-1d84191bf30c
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Headers: 
{'content-length': '14', 'date': 'Sat, 16 Nov 2013 21:41:20 GMT', 
'content-type': 'application/json', 'connection': 'close'}
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Body: {"output": 
""}
  2013-11-16 21:54:27.999 | }}}
  2013-11-16 21:54:27.999 | 
  2013-11-16 21:54:27.999 | Traceback (most recent call last):
  2013-11-16 21:54:27.999 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 281, in 
test_get_console_output
  2013-11-16 21:54:28.000 | self.wait_for(get_output)
  2013-11-16 21:54:28.000 |   File "tempest/api/compute/base.py", line 133, in 
wait_for
  2013-11-16 21:54:28.000 | condition()
  2013-11-16 21:54:28.000 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 278, in get_output
  2013-11-16 21:54:28.000 | self.assertTrue(output, "Console output was 
empty.")
  2013-11-16 21:54:28.000 |   File "/usr/lib/python2.7/unittest/case.py", line 
420, in assertTrue
  2013-11-16 21:54:28.000 | raise self.failureException(msg)
  2013-11-16 21:54:28.001 | AssertionError: Console output was empty.
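
  For context, the wait_for() in the trace polls a condition until it
  stops raising, roughly like this (a simplified sketch with
  illustrative timings, not tempest's exact code):

    import time

    def wait_for(condition, timeout=196, interval=1):
        # Retry the assertion-raising callable until it passes or the
        # budget runs out; the final call lets the failure propagate.
        start = time.time()
        while time.time() - start < timeout:
            try:
                condition()
                return
            except Exception:
                time.sleep(interval)
        condition()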

  n-api
  

  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Action: 'action', body: 
{"os-getConsoleOutput": {"length": 10}} _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:963
  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Calling method > _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:964
  2013-11-16 21:41:20.865 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Making synchronous call on 
compute.devstack-precise-hpcloud-az2-663635 ... multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:553
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] MSG_ID is 
a93dceabf6a441eb850b5fbb012d661f multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:556
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] UNIQUE_ID is 
706ab69dc066440fbe1bd7766b73d953. _add_unique_id 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:341
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] Closed channel #1 _do_close 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:95
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] using channel_id: 1 __init__ 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:71
  2013-11-16 21:41:20.870 22679 DEBUG amqp [-] Channel open _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:429
  2013-11-16 21:41:20.999 INFO nova.osapi_compute.wsgi.server 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
Se

[Yahoo-eng-team] [Bug 1254752] Re: test_volume_boot_pattern: SSH timed out

2013-12-12 Thread Attila Fazekas
Is there any sign that tempest did something incorrectly?

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254752

Title:
  test_volume_boot_pattern: SSH timed out

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Incomplete

Bug description:
  The test_volume_boot_pattern test fails sporadically:

  http://logs.openstack.org/97/57797/2/check/check-tempest-devstack-vm-neutron/38fdc5a/console.html.gz

  2013-11-22 21:53:07.670 | Traceback (most recent call last):
  2013-11-22 21:53:07.670 |   File 
"tempest/scenario/test_volume_boot_pattern.py", line 156, in 
test_volume_boot_pattern
  2013-11-22 21:53:07.670 | ssh_client = 
self._ssh_to_server(instance_from_snapshot, keypair)
  2013-11-22 21:53:07.670 |   File 
"tempest/scenario/test_volume_boot_pattern.py", line 100, in _ssh_to_server
  2013-11-22 21:53:07.670 | private_key=keypair.private_key)
  2013-11-22 21:53:07.670 |   File "tempest/scenario/manager.py", line 475, in 
get_remote_client
  2013-11-22 21:53:07.671 | return RemoteClient(ip, username, 
pkey=private_key)
  2013-11-22 21:53:07.671 |   File 
"tempest/common/utils/linux/remote_client.py", line 47, in __init__
  2013-11-22 21:53:07.671 | if not self.ssh_client.test_connection_auth():
  2013-11-22 21:53:07.671 |   File "tempest/common/ssh.py", line 148, in 
test_connection_auth
  2013-11-22 21:53:07.671 | connection = self._get_ssh_connection()
  2013-11-22 21:53:07.672 |   File "tempest/common/ssh.py", line 76, in 
_get_ssh_connection
  2013-11-22 21:53:07.672 | password=self.password)
  2013-11-22 21:53:07.672 | SSHTimeout: Connection to the 172.24.4.230 via SSH 
timed out.
  2013-11-22 21:53:07.672 | User: cirros, Password: None

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254752/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260333] [NEW] Malformed property protection rules return error to end user

2013-12-12 Thread Thomas Leaman
Public bug reported:

Using a property protections file such as:

[.*]
create = @,! 
read = @
update = @
delete = @

The create rule is invalid: '@' and '!' are mutually exclusive, so
they may not be combined in a single rule. This should probably cause
the service to refuse to start; instead, the service currently starts,
and any operation that touches a property covered by the rule returns:

500 Internal Server Error
Malformed property protection rule 'some_property': '@' and '!' are mutually 
exclusive   (HTTP 500)

to the end user. My feeling is that the end user should not receive any
information about the cause of the error, just the 500 status.
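
A minimal sketch of the kind of startup-time validation that would
catch this (illustrative only; the function and error text are
assumptions, not Glance's actual code):

    def validate_rule(operation, value):
        # '@' permits all roles and '!' permits none; listing both for
        # one operation is contradictory, so refuse to start.
        roles = [role.strip() for role in value.split(',')]
        if '@' in roles and '!' in roles:
            raise RuntimeError(
                "Malformed property protection rule '%s': '@' and '!' "
                "are mutually exclusive" % operation)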

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1260333

Title:
  Malformed property protection rules return error to end user

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Using a property protections file such as:

  [.*]
  create = @,! 
  read = @
  update = @
  delete = @

  The create rule is invalid: '@' and '!' are mutually exclusive, so
  they may not be combined in a single rule. This should probably cause
  the service to refuse to start; instead, the service currently
  starts, and any operation that touches a property covered by the rule
  returns:

  500 Internal Server Error
  Malformed property protection rule 'some_property': '@' and '!' are mutually 
exclusive   (HTTP 500)

  to the end user. My feeling is that the end user should not receive
  any information about the cause of the error, just the 500 status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1260333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1132879] Re: server reboot hard and rebuild are flaky in tempest when ssh is enabled

2013-12-12 Thread Adalberto Medeiros
** Changed in: tempest
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1132879

Title:
  server reboot hard and rebuild are flaky in tempest when ssh is
  enabled

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  Working on enabling back ssh access to VMs in tempest tests:

  https://review.openstack.org/#/c/22415/
  https://blueprints.launchpad.net/tempest/+spec/ssh-auth-strategy

  On the gate devstack with nova networking, the hard reboot and
  rebuild tests sometimes pass and sometimes do not.

  On the gate devstack with quantum networking, the hard reboot and
  rebuild tests systematically fail, blocking the overall blueprint
  implementation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1132879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213209] Re: test_list_image_filters.py setUpClass created image never becomes active

2013-12-12 Thread Sean Dague
*** This bug is a duplicate of bug 1258635 ***
https://bugs.launchpad.net/bugs/1258635

** This bug has been marked a duplicate of bug 1258635
   Race with changing image status when snapshotting

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213209

Title:
  test_list_image_filters.py setUpClass created image never becomes
  active

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Confirmed

Bug description:
  When running in parallel, the tests in test_list_image_filters
  occasionally fail while waiting for one of the created images to
  become active. See the logs here:

  http://logs.openstack.org/42/40342/2/gate/gate-tempest-devstack-vm-testr-full/dad876b/

  
  From the tempest log:

  2013-08-16 05:51:36.930 368 ERROR 
tempest.api.compute.images.test_list_image_filters [-] Request timed out
  2013-08-16 05:51:36.930 368 TRACE 
tempest.api.compute.images.test_list_image_filters Traceback (most recent call 
last):
  2013-08-16 05:51:36.930 368 TRACE 
tempest.api.compute.images.test_list_image_filters   File 
"tempest/api/compute/images/test_list_image_filters.py", line 67, in setUpClass
  2013-08-16 05:51:36.930 368 TRACE 
tempest.api.compute.images.test_list_image_filters 
cls.client.wait_for_image_status(cls.image2_id, 'ACTIVE')
  2013-08-16 05:51:36.930 368 TRACE 
tempest.api.compute.images.test_list_image_filters   File 
"tempest/services/compute/json/images_client.py", line 110, in 
wait_for_image_status
  2013-08-16 05:51:36.930 368 TRACE 
tempest.api.compute.images.test_list_image_filters raise 
exceptions.TimeoutException
  2013-08-16 05:51:36.930 368 TRACE 
tempest.api.compute.images.test_list_image_filters TimeoutException: Request 
timed out
  2013-08-16 05:51:36.930 368 TRACE 
tempest.api.compute.images.test_list_image_filters

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1213209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213212] Re: test_resize_server_confirm server failed to build

2013-12-12 Thread Sean Dague
Removing this as a tempest issue; I don't think it is actually a bug
in tempest, it's a nova state bug.

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213212

Title:
  test_resize_server_confirm server failed to build

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When running tempest in parallel, test_resize_server_confirm
  occasionally fails to build the server, which goes into an error
  state; see:

  2013-08-16 14:08:33.607 | 
==
  2013-08-16 14:08:33.607 | FAIL: 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm[gate,smoke]
  2013-08-16 14:08:33.607 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm[gate,smoke]
  2013-08-16 14:08:33.608 | 
--
  2013-08-16 14:08:33.608 | _StringException: Empty attachments:
  2013-08-16 14:08:33.608 |   stderr
  2013-08-16 14:08:33.609 |   stdout
  2013-08-16 14:08:33.609 | 
  2013-08-16 14:08:33.609 | Traceback (most recent call last):
  2013-08-16 14:08:33.609 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 161, in 
test_resize_server_confirm
  2013-08-16 14:08:33.609 | 
self.client.wait_for_server_status(self.server_id, 'VERIFY_RESIZE')
  2013-08-16 14:08:33.609 |   File 
"tempest/services/compute/json/servers_client.py", line 165, in 
wait_for_server_status
  2013-08-16 14:08:33.609 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-08-16 14:08:33.610 | BuildErrorException: Server 
ed3c7212-f4b6-4365-91b8-bc9e1a60 failed to build and is in ERROR status
  2013-08-16 14:08:33.610 | 
  2013-08-16 14:08:33.610 | 
  2013-08-16 14:08:33.611 | 
==

  A set of logs for this failure can be found here:
  
http://logs.openstack.org/63/42063/1/gate/gate-tempest-devstack-vm-testr-full/fa32f42/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1213212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1124674] Re: add-fixed-ip causes traceback

2013-12-12 Thread Sean Dague
** Changed in: tempest
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1124674

Title:
  add-fixed-ip causes traceback

Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  nova add-fixed-ip leads to a traceback in nova-network
  2013-02-13 16:10:57.711 ERROR nova.openstack.common.rpc.amqp [req-434b8722-566c-4345-933a-5646277cd6ef demo demo] Exception during message handling
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 276, in _process_data
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp     rval = self.proxy.dispatch(ctxt, version, method, **args)
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/network/manager.py", line 756, in add_fixed_ip_to_instance
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp     self._allocate_fixed_ips(context, instance_id, host, [network])
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/network/manager.py", line 212, in _allocate_fixed_ips
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp     vpn=vpn, address=address)
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/network/manager.py", line 801, in allocate_fixed_ip
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp     instance_ref = self.db.instance_get(context, instance_id)
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/db/api.py", line 593, in instance_get
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp     return IMPL.instance_get(context, instance_id)
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 137, in wrapper
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp     return f(*args, **kwargs)
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 1520, in instance_get
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp     raise exception.InstanceNotFound(instance_id=instance_id)
  2013-02-13 16:10:57.711 TRACE nova.openstack.common.rpc.amqp InstanceNotFound: Instance f96d5c44-6c17-46cb-8b5c-057717ce076d could not be found.

  There seems to be quite a bit of mixing of instance['id'] with
  instance['uuid'].
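
  A minimal sketch of the distinction (the call sites are illustrative;
  only the two nova DB entry points are taken from the tree):

    from nova import db

    def lookup_examples(context, numeric_id, instance_uuid):
        # instance_get() expects the integer primary key; handing it a
        # uuid is what produces the InstanceNotFound above.
        by_id = db.instance_get(context, numeric_id)
        # When the caller holds a uuid (as the network path here does),
        # the uuid-aware lookup is the right entry point.
        by_uuid = db.instance_get_by_uuid(context, instance_uuid)
        return by_id, by_uuid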

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1124674/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244055] Re: six has no attribute 'add_metaclass'

2013-12-12 Thread Sean Dague
** Changed in: tempest
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244055

Title:
  six has no attribute 'add_metaclass'

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  I have a patch failing in gate with traces containing the following:

  2013-10-23 22:27:54.336 |   File 
"/opt/stack/new/python-novaclient/novaclient/base.py", line 166, in 
  2013-10-23 22:27:54.336 | @six.add_metaclass(abc.ABCMeta)
  2013-10-23 22:27:54.337 | AttributeError: 'module' object has no attribute 
'add_metaclass'

  For full logs, see the failing patch:
  https://review.openstack.org/#/c/52876/

  It looks like this was caused by this recent commit:
  https://review.openstack.org/#/c/52255/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1195473] Re: Need some docs about Glance notifications with examples

2013-12-12 Thread Brian Rosmaita
** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1195473

Title:
  Need some docs about Glance notifications with examples

Status in OpenStack Manuals:
  Confirmed

Bug description:
  The notifications emitted by Glance were enhanced for Grizzly.  The
  developer docs have been updated, but we could probably use a listing
  of what notifications are emitted and what they look like for the
  operator manual.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1195473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1182883] Re: List servers matching a regex fails with Quantum

2013-12-12 Thread Sean Dague
** Changed in: tempest
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1182883

Title:
  List servers matching a regex fails with Quantum

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  The test
  tempest.api.compute.servers.test_list_server_filters:ListServerFiltersTestXML.test_list_servers_filtered_by_ip_regex
  tries to search for a server with only a fragment of its IP (GET
  http://XX/v2/$Tenant/servers?ip=10.0.), which issues the following
  Quantum request:
  http://XX/v2.0/ports.json?fixed_ips=ip_address%3D10.0. But it seems
  this regex search is not supported by Quantum, so the tempest test
  fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1182883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260138] Re: VMWARE: Unable to spawn instances from sparse/ide images

2013-12-12 Thread Abhishek Chanda
Duplicate of #1260139

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260138

Title:
  VMWARE: Unable to spawn instances from sparse/ide images

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Branch: stable/havana

  Traceback: http://paste.openstack.org/show/54855/

  Steps to reproduce:
  Upload an ide/sparse-type image to glance.
  Spawn an instance from that image.

  Actual result:
  Failed to spawn the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257460] Re: test_rescued_vm_detach_volume Volume test_detach failed to reach in-use status within the required time (196 s).

2013-12-12 Thread Attila Fazekas
Did tempest do anything incorrectly?

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257460

Title:
  test_rescued_vm_detach_volume Volume test_detach failed to reach in-
  use status within the required time (196 s).

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Incomplete

Bug description:
  http://logs.openstack.org/76/59576/3/check/check-tempest-dsvm-postgres-full/af0e4e2/console.html

  
  2013-12-03 01:57:24.699 | 
==
  2013-12-03 01:57:24.699 | FAIL: 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
  2013-12-03 01:57:24.699 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
  2013-12-03 01:57:24.699 | 
--
  2013-12-03 01:57:24.700 | _StringException: Empty attachments:
  2013-12-03 01:57:24.700 |   stderr
  2013-12-03 01:57:24.700 |   stdout
  2013-12-03 01:57:24.701 | 
  2013-12-03 01:57:24.701 | pythonlogging:'': {{{
  2013-12-03 01:57:24.701 | 2013-12-03 01:34:31,546 Request: POST 
http://127.0.0.1:8774/v2/742879cfcd384dffb721c692c81376be/servers/ab993087-a194-4798-ac63-3b5495690fb6/os-volume_attachments
  2013-12-03 01:57:24.701 | 2013-12-03 01:34:31,546 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
  2013-12-03 01:57:24.701 | 2013-12-03 01:34:31,546 Request Body: 
{"volumeAttachment": {"device": "/dev/vdf", "volumeId": 
"67bc3c74-1d66-4107-9d6d-f66b6a678e52"}}
  2013-12-03 01:57:24.701 | 2013-12-03 01:34:33,730 Response Status: 200
  2013-12-03 01:57:24.702 | 2013-12-03 01:34:33,730 Nova request id: 
req-4314d02e-0522-4410-9e6c-909528fb2314
  2013-12-03 01:57:24.702 | 2013-12-03 01:34:33,731 Response Headers: 
{'content-length': '194', 'date': 'Tue, 03 Dec 2013 01:34:33 GMT', 
'content-type': 'application/json', 'connection': 'close'}
  2013-12-03 01:57:24.702 | 2013-12-03 01:34:33,731 Response Body: 
{"volumeAttachment": {"device": "/dev/vdf", "serverId": 
"ab993087-a194-4798-ac63-3b5495690fb6", "id": 
"67bc3c74-1d66-4107-9d6d-f66b6a678e52", "volumeId": 
"67bc3c74-1d66-4107-9d6d-f66b6a678e52"}}
  2013-12-03 01:57:24.702 | 2013-12-03 01:34:33,731 Request: GET 
http://127.0.0.1:8774/v2/742879cfcd384dffb721c692c81376be/os-volumes/67bc3c74-1d66-4107-9d6d-f66b6a678e52
  2013-12-03 01:57:24.702 | 2013-12-03 01:34:33,732 Request Headers: 
{'X-Auth-Token': ''}
  2013-12-03 01:57:24.702 | 2013-12-03 01:34:34,025 Response Status: 200
  2013-12-03 01:57:24.702 | 2013-12-03 01:34:34,026 Nova request id: 
req-4a9aac2a-55cb-4287-8f4c-392d3498a304

  

  2013-12-03 01:57:24.871 | Traceback (most recent call last):
  2013-12-03 01:57:24.871 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 173, in 
test_rescued_vm_detach_volume
  2013-12-03 01:57:24.871 | self.volume_to_detach['id'], 'in-use')
  2013-12-03 01:57:24.872 |   File 
"tempest/services/compute/json/volumes_extensions_client.py", line 104, in 
wait_for_volume_status
  2013-12-03 01:57:24.872 | raise exceptions.TimeoutException(message)
  2013-12-03 01:57:24.872 | TimeoutException: Request timed out
  2013-12-03 01:57:24.872 | Details: Volume test_detach failed to reach in-use 
status within the required time (196 s).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1257460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

