[Yahoo-eng-team] [Bug 1362966] [NEW] Two IPv6 attributes cannot be set to None

2014-08-29 Thread Akihiro Motoki
Public bug reported:

The default value of the IPv6 RA and address modes is None (if they are not 
specified when the subnet is created).
However, we cannot change the two IPv6 modes back to None from other values 
after creating a subnet.
(ra_mode, address_mode) = (None, None) is a valid combination, but for example 
we cannot change them from (slaac, slaac) to (None, None).

IMO the two IPv6 modes should accept None in the API so that users can reset
the attribute value to None.

ubuntu@dev02:~/neutron (master)$ neutron subnet-show 4ab34962-b330-4be5-98fe-ac7862f8d511
+-------------------+---------------------------------------------------+
| Field             | Value                                             |
+-------------------+---------------------------------------------------+
| allocation_pools  | {"start": "fe80:::2", "end": "fe80::ff:::::fffe"} |
| cidr              | fe80:::/40                                        |
| dns_nameservers   |                                                   |
| enable_dhcp       | True                                              |
| gateway_ip        | fe80:::1                                          |
| host_routes       |                                                   |
| id                | 4ab34962-b330-4be5-98fe-ac7862f8d511              |
| ip_version        | 6                                                 |
| ipv6_address_mode | slaac                                             |
| ipv6_ra_mode      | slaac                                             |
| name              |                                                   |
| network_id        | 07315dce-0c6c-4c2f-99ec-e8575ffa72af              |
| tenant_id         | 36c29390faa8408cb9deff8762319740                  |
+-------------------+---------------------------------------------------+

ubuntu@dev02:~/neutron (master)$ neutron subnet-update 4ab34962-b330-4be5-98fe-ac7862f8d511 --ipv6_ra_mode action=clear --ipv6_address_mode action=clear
Invalid input for ipv6_ra_mode. Reason: 'None' is not in ['dhcpv6-stateful', 'dhcpv6-stateless', 'slaac']. (HTTP 400) (Request-ID: req-9431df59-3881-4c85-861e-b25217b8013d)
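The behavior this report asks for can be sketched as follows. This is a hypothetical illustration, not Neutron's actual validator code: the point is that None is treated as a valid value (a reset) rather than being rejected against the enum.

```python
# Hypothetical sketch (not Neutron's actual validator): treat None as a
# request to reset the attribute, so it is accepted instead of being
# checked against the list of mode strings.
IPV6_MODES = ['dhcpv6-stateful', 'dhcpv6-stateless', 'slaac']

def validate_ipv6_mode(value):
    """Return an error string, or None when the value is acceptable."""
    if value is None:
        return None  # allow resetting the attribute to None
    if value not in IPV6_MODES:
        return "'%s' is not in %s" % (value, IPV6_MODES)
    return None
```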

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362966

Title:
  Two IPv6 attributes cannot be set to None

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The default value of the IPv6 RA and address modes is None (if they are not 
  specified when the subnet is created).
  However, we cannot change the two IPv6 modes back to None from other values 
  after creating a subnet.
  (ra_mode, address_mode) = (None, None) is a valid combination, but for 
  example we cannot change them from (slaac, slaac) to (None, None).

  IMO the two IPv6 modes should accept None in the API so that users can
  reset the attribute value to None.

  ubuntu@dev02:~/neutron (master)$ neutron subnet-show 4ab34962-b330-4be5-98fe-ac7862f8d511
  +-------------------+---------------------------------------------------+
  | Field             | Value                                             |
  +-------------------+---------------------------------------------------+
  | allocation_pools  | {"start": "fe80:::2", "end": "fe80::ff:::::fffe"} |
  | cidr              | fe80:::/40                                        |
  | dns_nameservers   |                                                   |
  | enable_dhcp       | True                                              |
  | gateway_ip        | fe80:::1                                          |
  | host_routes       |                                                   |
  | id                | 4ab34962-b330-4be5-98fe-ac7862f8d511              |
  | ip_version        | 6                                                 |
  | ipv6_address_mode | slaac                                             |
  | ipv6_ra_mode      | slaac                                             |
  | name              |                                                   |
  | network_id        | 07315dce-0c6c-4c2f-99ec-e8575ffa72af              |

[Yahoo-eng-team] [Bug 1360081] Re: Unable to list nova instances using "IP" regex filter

2014-08-29 Thread Ghanshyam Mann
*** This bug is a duplicate of bug 1182883 ***
https://bugs.launchpad.net/bugs/1182883

Actually, neutron does not have the regex-based query feature for IPs that
nova-network does. This is related to neutron; they plan to add the same
feature.

** Changed in: python-novaclient
   Status: New => Invalid

** Also affects: neutron
   Importance: Undecided
   Status: New

** This bug has been marked a duplicate of bug 1182883
   List servers matching a regex fails with Quantum

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360081

Title:
  Unable to list nova instances using "IP" regex filter

Status in OpenStack Neutron (virtual network service):
  New
Status in Python client library for Nova:
  Invalid

Bug description:
  The CLI help for nova list says that instances can be listed with an IP
  regex filter:

  [raies@localhost devstack]$ nova help list
  usage: nova list [--reservation-id <reservation-id>] [--ip <ip-regex>]
                   [--ip6 <ip6-regex>] [--name <name-regex>]
                   [--instance-name <name-regex>] [--status <status>]
                   [--flavor <flavor>] [--image <image>] [--host <hostname>]
                   [--all-tenants [<0|1>]] [--tenant [<tenant>]] [--deleted]
                   [--fields <fields>] [--minimal]

  List active servers.

  Optional arguments:
    --reservation-id <reservation-id>
                          Only return servers that match reservation-id.
    --ip <ip-regex>       Search with regular expression match by IP
                          address.
    --ip6 <ip6-regex>     Search with regular expression match by IPv6
                          address.
    --name <name-regex>   Search with regular expression match by name.
    --instance-name <name-regex>
                          Search with regular expression match by server
                          name.
    --status <status>     Search by server status.
    --flavor <flavor>     Search by flavor name or ID.
    --image <image>       Search by image name or ID.
    --host <hostname>     Search servers by hostname to which they are
                          assigned (Admin only).
    --all-tenants [<0|1>] Display information from all tenants (Admin
                          only).
    --tenant [<tenant>]   Display information from single tenant (Admin
                          only).
    --deleted             Only display deleted servers (Admin only).
    --fields <fields>     Comma-separated list of fields to display. Use
                          the show command to see which fields are
                          available.
    --minimal             Get only uuid and name.

  I am able to filter instances using a name regex, but I am unable to
  filter with an IP regex.
  When I use a complete IP, the instance with that IP is listed, but an IP
  regex does not work.
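  The filtering the help text describes can be sketched as follows. This is a
  hypothetical illustration of the regex matching nova-network performs (and
  which the reporter expects to work against neutron); the data shapes and
  function name are assumptions, not nova's actual code.

```python
import re

# Hypothetical sketch: keep only servers with at least one address
# matching the user-supplied regular expression.
def filter_servers_by_ip(servers, pattern):
    prog = re.compile(pattern)
    return [s for s in servers
            if any(prog.search(ip) for ip in s['ips'])]

# Mirrors the single test-server from the reproduction steps below.
servers = [{'name': 'test-server', 'ips': ['172.24.4.18']}]
```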

  The steps I performed are as follows:

  1.

  [raies@localhost devstack]$ nova list
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks           |
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+
  | 02cbf054-5813-4f3f-80a9-fb5cfeb0a494 | test-server | ACTIVE | -          | Running     | public=172.24.4.18 |
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+

  2.

  [raies@localhost devstack]$ nova list --name test
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks           |
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+
  | 02cbf054-5813-4f3f-80a9-fb5cfeb0a494 | test-server | ACTIVE | -          | Running     | public=172.24.4.18 |
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+

  3.
  [raies@localhost devstack]$ nova list --name "test"
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks           |
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+
  | 02cbf054-5813-4f3f-80a9-fb5cfeb0a494 | test-server | ACTIVE | -          | Running     | public=172.24.4.18 |
  +--------------------------------------+-------------+--------+------------+-------------+--------------------+

  4.
  [raies@localhost devstack]$ nova list --name "test*"
  +--------------------------------------+-

[Yahoo-eng-team] [Bug 1363014] [NEW] NoopQuotasDriver.get_settable_quotas() method always fails with KeyError

2014-08-29 Thread Roman Podoliaka
Public bug reported:

NoopQuotasDriver.get_settable_quotas() tries to call update() on a non-
existent dictionary entry. While NoopQuotasDriver is not particularly useful,
we still want it to work.
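A minimal reproduction of the reported failure mode, with illustrative names
rather than Nova's actual code: calling update() on a dictionary entry that
does not exist raises KeyError, while seeding the entry first (for example
with setdefault) avoids it.

```python
# Illustrative only: the buggy pattern and one possible fix.
def settable_quotas_buggy(quotas, resource):
    quotas[resource].update(minimum=0, maximum=-1)  # KeyError if absent
    return quotas

def settable_quotas_fixed(quotas, resource):
    # setdefault seeds the entry before update(), so it never raises.
    quotas.setdefault(resource, {}).update(minimum=0, maximum=-1)
    return quotas
```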

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363014

Title:
  NoopQuotasDriver.get_settable_quotas() method always fails with
  KeyError

Status in OpenStack Compute (Nova):
  New

Bug description:
  NoopQuotasDriver.get_settable_quotas() tries to call update() on a non-
  existent dictionary entry. While NoopQuotasDriver is not particularly
  useful, we still want it to work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363019] [NEW] test_versions.py is currently breaking pep8 in master

2014-08-29 Thread Henry Nash
Public bug reported:

Somehow a set of badly aligned '}' characters got into master in
test_versions.py, causing every patch to fail.  This fixes it.
** Affects: keystone
 Importance: Critical
 Assignee: Henry Nash (henry-nash)
 Status: In Progress

** Changed in: keystone
   Importance: Undecided => Critical

** Changed in: keystone
 Assignee: (unassigned) => Henry Nash (henry-nash)

** Changed in: keystone
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363019

Title:
  test_versions.py is currently breaking pep8 in master

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Somehow a set of badly aligned '}' characters got into master in
  test_versions.py, causing every patch to fail.  This fixes it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363037] [NEW] neutron allows creating IPV6 floating ip

2014-08-29 Thread Kirill Shileev
Public bug reported:

Despite https://bugs.launchpad.net/neutron/+bug/1323766, neutron creates an
IPv6 floating IP and associates it with an IPv6 tenant network.

To reproduce:
neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| ba8f4339-dbb2-4678-a111-1e2d7f63a0b9 |                  | 2005::3             |         |
+--------------------------------------+------------------+---------------------+---------+
797:devstack> neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                                        |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------+
| 0bcd1967-e462-4947-a9b3-3b0e1bb159f8 |      | fa:16:3e:93:29:ba | {"subnet_id": "9a5a6e03-255f-42db-8b56-955d50ec973b", "ip_address": "2005::2"}                   |
| 1f7eaac2-c5e6-481a-b98e-ace276fa581a |      | fa:16:3e:aa:20:db | {"subnet_id": "62f2f8e9-e997-47e1-b5c6-5fa888b270e9", "ip_address": "feee::2"}                   |
| 713f57e8-df16-49bf-97fd-b41bd505de10 |      | fa:16:3e:40:78:2e | {"subnet_id": "62f2f8e9-e997-47e1-b5c6-5fa888b270e9", "ip_address": "feee::1"}                   |
| d57165ea-f0a8-4d13-9fbf-5a2872887b38 |      | fa:16:3e:c9:b6:df | {"subnet_id": "9a5a6e03-255f-42db-8b56-955d50ec973b", "ip_address": "2005::3"}                   |
| d6c391f1-c199-4846-aebd-a948877f7f57 |      | fa:16:3e:d4:1f:b4 | {"subnet_id": "62f2f8e9-e997-47e1-b5c6-5fa888b270e9", "ip_address": "feee::f816:3eff:fed4:1fb4"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------+
798:devstack> neutron floatingip-associate ba8f4339-dbb2-4678-a111-1e2d7f63a0b9 d6c391f1-c199-4846-aebd-a948877f7f57
Associated floating IP ba8f4339-dbb2-4678-a111-1e2d7f63a0b9

Expected behavior: the association should fail and an error message should be
printed.

Moreover, it is possible to create an IPv4 floating IP and associate it with
an IPv6 tenant port.
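The check the reporter expects can be sketched as follows. This is a
hypothetical illustration (not Neutron's actual code): both the floating IP
and the fixed IP must be IPv4 before an association is allowed.

```python
import ipaddress

# Hypothetical sketch of the expected guard: floating IPs are an IPv4
# (NAT-based) concept here, so refuse anything that is not v4-to-v4.
def can_associate(floating_ip, fixed_ip):
    return (ipaddress.ip_address(floating_ip).version == 4
            and ipaddress.ip_address(fixed_ip).version == 4)
```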

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363037

Title:
  neutron allows creating IPV6 floating ip

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Despite https://bugs.launchpad.net/neutron/+bug/1323766, neutron creates
  an IPv6 floating IP and associates it with an IPv6 tenant network.

  To reproduce:
  neutron floatingip-list
  +--------------------------------------+------------------+---------------------+---------+
  | id                                   | fixed_ip_address | floating_ip_address | port_id |
  +--------------------------------------+------------------+---------------------+---------+
  | ba8f4339-dbb2-4678-a111-1e2d7f63a0b9 |                  | 2005::3             |         |
  +--------------------------------------+------------------+---------------------+---------+
  797:devstack> neutron port-list
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                                        |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------+
  | 0bcd1967-e462-4947-a9b3-3b0e1bb159f8 |      | fa:16:3e:93:29:ba | {"subnet_id": "9a5a6e03-255f-42db-8b56-955d50ec973b", "ip_address": "2005::2"}                   |
  | 1f7eaac2-c5e6-481a-b98e-ace276fa581a |      | fa:16:3e:aa:20:db | {"subnet_id": "62f2f8e9-e997-47e1-b5c6-5fa888b270e9", "ip_address": "feee::2"}                   |
  | 713f57e8-df16-49bf-97fd-b41bd505de10 |      | fa:16:3e:40:78:2e | {"subnet_id": "62f2f8e9-e997-47e1-b5c6-5fa888b270e9", "ip_address": "feee::1"}                   |
  | d57165ea-f0a8-4d13-9fbf-5a2872887b38 |      | fa:16:3e:c9:b6:df | {"subnet_id": "9a5a6e03-255f-42db-8b56-955d50ec973b", "ip_address": "2005::3"}                   |
  | d6c391f1-c199-4846-aebd-a948877f7f57 |      | fa:16:3e:d4:1f:b4 | {"subnet_id": "62f2f8e9-e997-47e1-b5c6-5fa888b270e9", "ip_address": "feee::f816:3eff:fed4:1fb4"} |
  +--

[Yahoo-eng-team] [Bug 1363047] [NEW] test_sql_upgrade and live_test not working for non-SQLite DBs

2014-08-29 Thread Henry Nash
Public bug reported:

It appears that our sql upgrade unit tests are broken for DBs that
properly support FKs (teardown fails due to FK constraints).  I suspect
this is because we no longer have the downgrade steps below 034 (since
they were squashed).
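The teardown ordering problem can be illustrated in miniature. This is a
hedged sketch (not Keystone's migration code) using sqlite3 purely to show
the principle: with foreign keys enforced, tables should be dropped in
reverse dependency order, children before parents.

```python
import sqlite3

# Two tables with an FK dependency: child references parent.
conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('CREATE TABLE parent (id INTEGER PRIMARY KEY)')
conn.execute('CREATE TABLE child (id INTEGER PRIMARY KEY, '
             'pid INTEGER REFERENCES parent(id))')

def drop_all_tables(conn):
    # Drop in reverse dependency order so FK constraints are respected.
    for table in ('child', 'parent'):
        conn.execute('DROP TABLE %s' % table)
```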

** Affects: keystone
 Importance: High
 Status: New

** Changed in: keystone
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363047

Title:
  test_sql_upgrade and live_test not working for non-SQLite DBs

Status in OpenStack Identity (Keystone):
  New

Bug description:
  It appears that our sql upgrade unit tests are broken for DBs that
  properly support FKs (teardown fails due to FK constraints).  I
  suspect this is because we no longer have the downgrade steps below
  034 (since they were squashed).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363058] [NEW] cfg.CONF.state_path set to wrong value in tests

2014-08-29 Thread Assaf Muller
Public bug reported:

cfg.CONF.state_path is set to a random temporary directory in
neutron.tests.base:BaseTestCase.setUp. This value was then overwritten
in neutron.tests.unit.__init__. Tests that need to read or otherwise use
cfg.CONF.state_path were getting the directory from which the tests were
running and not the temporary directory specially created for the
current test run.

Note that the usage of state_path to set lock_path, the dhcp state path and
the like was working as expected, and was not affected by this bug.
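The fix pattern can be sketched as follows. The names here are illustrative
stand-ins, not Neutron's actual fixtures: the test base class should point
state_path at a fresh per-run temporary directory, rather than letting a
package-level override leak the working directory into tests.

```python
import os
import tempfile

class FakeConf(object):
    """Stand-in for cfg.CONF with a state_path option."""
    state_path = '.'  # the bad default: the current working directory

def use_temp_state_path(conf):
    # Create a unique temp dir per test run and record it on the config,
    # so any test that reads state_path sees the isolated directory.
    conf.state_path = tempfile.mkdtemp(prefix='neutron-test-')
    return conf.state_path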

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: unittest

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363058

Title:
  cfg.CONF.state_path set to wrong value in tests

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  cfg.CONF.state_path is set to a random temporary directory in
  neutron.tests.base:BaseTestCase.setUp. This value was then
  overwritten in neutron.tests.unit.__init__. Tests that need to read or
  otherwise use cfg.CONF.state_path were getting the directory from
  which the tests were running and not the temporary directory specially
  created for the current test run.

  Note that the usage of state_path to set lock_path, the dhcp state path
  and the like was working as expected, and was not affected by this
  bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363064] [NEW] Cannot set only one of IPv6 attributes while second is None

2014-08-29 Thread Jacek Świderski
Public bug reported:

When trying to update an IPv6 subnet's ra_mode or address_mode (on a subnet
created with the default None attributes), the operation is not permitted
(although it should be):

neutron subnet-create --ip-version 6 Test fe80:::/40

+-------------------+---------------------------------------------------+
| Field             | Value                                             |
+-------------------+---------------------------------------------------+
| allocation_pools  | {"start": "fe80:::2", "end": "fe80::ff:::::fffe"} |
| cidr              | fe80:::/40                                        |
| dns_nameservers   |                                                   |
| enable_dhcp       | True                                              |
| gateway_ip        | fe80:::1                                          |
| host_routes       |                                                   |
| id                | 720d4f22-ee49-40c9-a865-cb31defcf6bd              |
| ip_version        | 6                                                 |
| ipv6_address_mode |                                                   |
| ipv6_ra_mode      |                                                   |
| name              |                                                   |
| network_id        | 124826a4-77ed-4682-8e39-d9090689cb85              |
| tenant_id         | d2b47b4677fb4e30ad1961fb7d51ffdc                  |
+-------------------+---------------------------------------------------+

neutron subnet-update 720d4f22-ee49-40c9-a865-cb31defcf6bd --ipv6_address_mode slaac
Invalid input for operation: ipv6_ra_mode set to 'None' with ipv6_address_mode set to 'slaac' is not valid. If both attributes are set, they must be the same value.

neutron subnet-update 720d4f22-ee49-40c9-a865-cb31defcf6bd --ipv6_ra_mode slaac
Invalid input for operation: ipv6_ra_mode set to 'slaac' with ipv6_address_mode set to 'None' is not valid. If both attributes are set, they must be the same value.

Clearly, as the message states, leaving one attribute unset should be allowed
(I have also found the spec where the attribute combinations are discussed):
http://specs.openstack.org/openstack/neutron-specs/specs/juno/ipv6-radvd-ra.html
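The combination check the reporter argues for can be sketched as follows.
This is a hypothetical illustration, not Neutron's implementation: None means
"unset" and is compatible with anything; only when both modes are set must
they match.

```python
# Hypothetical sketch of the expected validation: an unset (None) mode
# never conflicts; two set modes must be equal.
def ipv6_modes_valid(ra_mode, address_mode):
    if ra_mode is None or address_mode is None:
        return True
    return ra_mode == address_mode
```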

** Affects: neutron
 Importance: Undecided
 Assignee: Jacek Świderski (jacek-swiderski)
 Status: Confirmed


** Tags: ipv6

** Tags added: ipv6

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Jacek Świderski (jacek-swiderski)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363064

Title:
  Cannot set only one of IPv6 attributes while second is None

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  When trying to update an IPv6 subnet's ra_mode or address_mode (on a
  subnet created with the default None attributes), the operation is not
  permitted (although it should be):

  neutron subnet-create --ip-version 6 Test fe80:::/40

  
  +-------------------+---------------------------------------------------+
  | Field             | Value                                             |
  +-------------------+---------------------------------------------------+
  | allocation_pools  | {"start": "fe80:::2", "end": "fe80::ff:::::fffe"} |
  | cidr              | fe80:::/40                                        |
  | dns_nameservers   |                                                   |
  | enable_dhcp       | True                                              |
  | gateway_ip        | fe80:::1                                          |
  | host_routes       |                                                   |
  | id                | 720d4f22-ee49-40c9-a865-cb31defcf6bd              |
  | ip_version        | 6                                                 |
  | ipv6_address_mode |                                                   |
  | ipv6_ra_mode      |                                                   |
  | name              |                                                   |
  | network_id        | 124826a4-77ed-4682-8e39-d9090689cb85              |

[Yahoo-eng-team] [Bug 1363103] [NEW] The server has either erred or is incapable of performing the requested operation. (HTTP 500)

2014-08-29 Thread Abhishek Kekane
Public bug reported:

Gate jobs failed on 'gate-tempest-dsvm-neutron-full' with the following
error.

ClientException: The server has either erred or is incapable of
performing the requested operation. (HTTP 500) (Request-ID: req-
7d7ab999-1351-43be-bd51-96a100a7cdeb)

Detailed stack trace is:

RESP BODY: {"itemNotFound": {"message": "Instance could not be found", "code": 
404}}
}}}

Traceback (most recent call last):
File "tempest/scenario/test_network_advanced_server_ops.py", line 73, in setUp
create_kwargs=create_kwargs)
File "tempest/scenario/manager.py", line 778, in create_server
self.status_timeout(client.servers, server.id, 'ACTIVE')
File "tempest/scenario/manager.py", line 572, in status_timeout
not_found_exception=not_found_exception)
File "tempest/scenario/manager.py", line 635, in _status_timeout
CONF.compute.build_interval):
File "tempest/test.py", line 614, in call_until_true
if func():
File "tempest/scenario/manager.py", line 606, in check_status
thing = things.get(thing_id)
File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 555, 
in get
return self._get("/servers/%s" % base.getid(server), "server")
File "/opt/stack/new/python-novaclient/novaclient/base.py", line 93, in _get
_resp, body = self.api.client.get(url)
File "/opt/stack/new/python-novaclient/novaclient/client.py", line 487, in get
return self._cs_request(url, 'GET', **kwargs)
File "/opt/stack/new/python-novaclient/novaclient/client.py", line 465, in 
_cs_request
resp, body = self._time_request(url, method, **kwargs)
File "/opt/stack/new/python-novaclient/novaclient/client.py", line 439, in 
_time_request
resp, body = self.request(url, method, **kwargs)
File "/opt/stack/new/python-novaclient/novaclient/client.py", line 433, in 
request
raise exceptions.from_response(resp, body, url, method)
 ClientException: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-7d7ab999-1351-43be-bd51-96a100a7cdeb)

Traceback (most recent call last):
StringException: Empty attachments:
stderr
stdout

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ntt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363103

Title:
  The server has either erred or is incapable of performing the
  requested operation. (HTTP 500)

Status in OpenStack Compute (Nova):
  New

Bug description:
  Gate jobs failed on 'gate-tempest-dsvm-neutron-full' with the following
  error.

  ClientException: The server has either erred or is incapable of
  performing the requested operation. (HTTP 500) (Request-ID: req-
  7d7ab999-1351-43be-bd51-96a100a7cdeb)

  Detailed stack trace is:

  RESP BODY: {"itemNotFound": {"message": "Instance could not be found", 
"code": 404}}
  }}}

  Traceback (most recent call last):
  File "tempest/scenario/test_network_advanced_server_ops.py", line 73, in setUp
  create_kwargs=create_kwargs)
  File "tempest/scenario/manager.py", line 778, in create_server
  self.status_timeout(client.servers, server.id, 'ACTIVE')
  File "tempest/scenario/manager.py", line 572, in status_timeout
  not_found_exception=not_found_exception)
  File "tempest/scenario/manager.py", line 635, in _status_timeout
  CONF.compute.build_interval):
  File "tempest/test.py", line 614, in call_until_true
  if func():
  File "tempest/scenario/manager.py", line 606, in check_status
  thing = things.get(thing_id)
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 555, 
in get
  return self._get("/servers/%s" % base.getid(server), "server")
  File "/opt/stack/new/python-novaclient/novaclient/base.py", line 93, in _get
  _resp, body = self.api.client.get(url)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 487, in get
  return self._cs_request(url, 'GET', **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 465, in 
_cs_request
  resp, body = self._time_request(url, method, **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 439, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 433, in 
request
  raise exceptions.from_response(resp, body, url, method)
   ClientException: The server has either erred or is incapable of performing 
the requested operation. (HTTP 500) (Request-ID: 
req-7d7ab999-1351-43be-bd51-96a100a7cdeb)

  Traceback (most recent call last):
  StringException: Empty attachments:
  stderr
  stdout

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363119] [NEW] nova messaging queues slow in idle cluster

2014-08-29 Thread Paul Griffin
Public bug reported:

With hundreds of hosts, a cluster with little to no activity can
experience prolonged delays in message processing, eventually rendering
some or all hosts unresponsive to nova commands (boot instance, etc).

** Affects: nova
 Importance: Undecided
 Assignee: Paul Griffin (paul-griffin)
 Status: New


** Tags: performance scalability

** Description changed:

  With hundreds of hosts, a cluster with little to no activity can
  experience prolonged delays in message processing, eventually rendering
- some or all hosts unusable.
+ some or all hosts unresponsive to nova commands (boot instance, etc).

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363119

Title:
  nova messaging queues slow in idle cluster

Status in OpenStack Compute (Nova):
  New

Bug description:
  With hundreds of hosts, a cluster with little to no activity can
  experience prolonged delays in message processing, eventually
  rendering some or all hosts unresponsive to nova commands (boot
  instance, etc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363159] [NEW] Gate failing on test_set_external_network_empty: ('Connection aborted.', gaierror(-2, 'Name or service not known'))

2014-08-29 Thread Julie Pichon
Public bug reported:

All patches are failing with the following error. david-lyle and fungi
determined on #openstack-infra that this was due to requests being bumped
to 2.4.0.

2014-08-29 15:16:30.485 | 
==
2014-08-29 15:16:30.485 | ERROR: test_set_external_network_empty 
(openstack_dashboard.dashboards.admin.routers.tests.RouterTests)
2014-08-29 15:16:30.523 | 
--
2014-08-29 15:16:30.523 | Traceback (most recent call last):
2014-08-29 15:16:30.523 |   File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py",
 line 77, in instance_stub_out
2014-08-29 15:16:30.524 | return fn(self, *args, **kwargs)
2014-08-29 15:16:30.524 |   File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/routers/tests.py",
 line 72, in test_set_external_network_empty
2014-08-29 15:16:30.524 | res = self.client.get(self.INDEX_URL)
2014-08-29 15:16:30.524 |   File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py",
 line 473, in get
2014-08-29 15:16:30.524 | response = super(Client, self).get(path, 
data=data, **extra)
2014-08-29 15:16:30.524 |   File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py",
 line 280, in get
2014-08-29 15:16:30.524 | return self.request(**r)
2014-08-29 15:16:30.524 |   File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py",
 line 444, in request
2014-08-29 15:16:30.524 | six.reraise(*exc_info)
2014-08-29 15:16:30.525 |   File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 112, in get_response
2014-08-29 15:16:30.525 | response = wrapped_callback(request, 
*callback_args, **callback_kwargs)
2014-08-29 15:16:30.525 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", line 36, 
in dec
2014-08-29 15:16:30.525 | return view_func(request, *args, **kwargs)
2014-08-29 15:16:30.525 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", line 84, 
in dec
2014-08-29 15:16:30.525 | return view_func(request, *args, **kwargs)
2014-08-29 15:16:30.525 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", line 52, 
in dec
2014-08-29 15:16:30.525 | return view_func(request, *args, **kwargs)
2014-08-29 15:16:30.525 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", line 36, 
in dec
2014-08-29 15:16:30.525 | return view_func(request, *args, **kwargs)
2014-08-29 15:16:30.526 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", line 84, 
in dec
2014-08-29 15:16:30.526 | return view_func(request, *args, **kwargs)
2014-08-29 15:16:30.526 |   File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
2014-08-29 15:16:30.526 | return self.dispatch(request, *args, **kwargs)
2014-08-29 15:16:30.526 |   File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
2014-08-29 15:16:30.526 | return handler(request, *args, **kwargs)
2014-08-29 15:16:30.526 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", line 
157, in get
2014-08-29 15:16:30.526 | handled = self.construct_tables()
2014-08-29 15:16:30.526 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", line 
148, in construct_tables
2014-08-29 15:16:30.526 | handled = self.handle_table(table)
2014-08-29 15:16:30.527 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", line 
120, in handle_table
2014-08-29 15:16:30.527 | data = self._get_data_dict()
2014-08-29 15:16:30.527 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", line 
185, in _get_data_dict
2014-08-29 15:16:30.527 | self._data = {self.table_class._meta.name: 
self.get_data()}
2014-08-29 15:16:30.527 |   File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/routers/views.py",
 line 56, in get_data
2014-08-29 15:16:30.527 | routers = self._get_routers()
2014-08-29 15:16:30.527 |   File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/routers/views.py",
 line 43, in _get_routers
2014-08-29 15:16:30.527 | tenant_dict = self._get_tenant_list()
2014-08-29 15:16:30.527 |   File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/utils/memoized.py", line 
90, in wrapped
2014-08-29 15:16:30.528 | value = cache[key] = func(*args, **kwargs)
2014-08-29 15:16:30.528 |   File 
"/home/

[Yahoo-eng-team] [Bug 1363188] [NEW] Changing user settings as a non-admin user changes them for all users across all projects
2014-08-29 Thread Amogh
Public bug reported:

1. Log in as a non-admin user.
2. Go to user settings and change the settings.
3. Log out.
4. Log in as admin or as any other user created in any other project.
5. Go to settings and observe that the settings applied by the non-admin
user are still displayed.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "User_settings-non-admin.PNG"
   
https://bugs.launchpad.net/bugs/1363188/+attachment/4190309/+files/User_settings-non-admin.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1363188

Title:
  Changing user settings as a non-admin user changes them for all users
  across all projects

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Log in as a non-admin user.
  2. Go to user settings and change the settings.
  3. Log out.
  4. Log in as admin or as any other user created in any other project.
  5. Go to settings and observe that the settings applied by the non-admin
  user are still displayed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1363188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363199] [NEW] Update rpc version in Arista Plugin

2014-08-29 Thread Sukhdev Kapur
Public bug reported:

RPC version in L3 router service plugin and L3 agent has been upgraded
to 1.3. Update the version in Arista service plugin from 1.2 to 1.3 to
be compatible.

** Affects: neutron
 Importance: Undecided
 Assignee: Sukhdev Kapur (sukhdev-8)
 Status: New


** Tags: arista

** Changed in: neutron
 Assignee: (unassigned) => Sukhdev Kapur (sukhdev-8)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363199

Title:
  Update rpc version in Arista Plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  RPC version in L3 router service plugin and L3 agent has been upgraded
  to 1.3. Update the version in Arista service plugin from 1.2 to 1.3 to
  be compatible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363221] [NEW] tenant_list is not mocked in admin.routers.tests

2014-08-29 Thread Akihiro Motoki
Public bug reported:

commit d9ecd8d15759297a15382d7353482a2eff4b26bb 
(https://review.openstack.org/#/c/75934/) added a new test to 
openstack_dashboard.admin.routers.tests, but api.keystone.tenant_list is not 
mocked.
As a result, unit tests fails like 
http://logs.openstack.org/88/114088/6/check/gate-horizon-python27/cce2205/dashboard_nose_results.html.

Curiously enough, it doesn't always occur. I cannot reproduce it in my
local env, but the OpenStack CI Jenkins jobs fail.
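For illustration, here is the general shape of the missing stub, using the stdlib `unittest.mock` rather than Horizon's actual test helpers (the view function and its arguments below are hypothetical stand-ins):

```python
from unittest import mock

# Hypothetical stand-in for view code that calls the keystone API.
def get_tenant_names(request, keystone_api):
    tenants, _has_more = keystone_api.tenant_list(request)
    return [t.name for t in tenants]

# In a unit test, the API call is replaced so no real keystone
# service is needed and the test cannot fail intermittently on it.
fake_tenant = mock.Mock()
fake_tenant.name = "demo"
keystone_api = mock.Mock()
keystone_api.tenant_list.return_value = ([fake_tenant], False)

names = get_tenant_names(mock.Mock(), keystone_api)
```

Without such a stub, the test's behavior depends on whether an unmocked `tenant_list` call happens to succeed in the test environment, which matches the flaky failures seen in the gate.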

ft22.5: 
openstack_dashboard.dashboards.admin.routers.tests.RouterTests.test_set_external_network_empty
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
  File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py",
 line 77, in instance_stub_out
return fn(self, *args, **kwargs)
  File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/routers/tests.py",
 line 72, in test_set_external_network_empty
res = self.client.get(self.INDEX_URL)
  File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py",
 line 473, in get
response = super(Client, self).get(path, data=data, **extra)
  File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py",
 line 280, in get
return self.request(**r)
  File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py",
 line 444, in request
six.reraise(*exc_info)
  File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 112, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", 
line 36, in dec
return view_func(request, *args, **kwargs)
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", 
line 84, in dec
return view_func(request, *args, **kwargs)
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", 
line 52, in dec
return view_func(request, *args, **kwargs)
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", 
line 36, in dec
return view_func(request, *args, **kwargs)
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", 
line 84, in dec
return view_func(request, *args, **kwargs)
  File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
return self.dispatch(request, *args, **kwargs)
  File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
return handler(request, *args, **kwargs)
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", 
line 157, in get
handled = self.construct_tables()
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", 
line 148, in construct_tables
handled = self.handle_table(table)
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", 
line 120, in handle_table
data = self._get_data_dict()
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", 
line 185, in _get_data_dict
self._data = {self.table_class._meta.name: self.get_data()}
  File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/routers/views.py",
 line 56, in get_data
routers = self._get_routers()
  File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/routers/views.py",
 line 43, in _get_routers
tenant_dict = self._get_tenant_list()
  File 
"/home/jenkins/workspace/gate-horizon-python27/horizon/utils/memoized.py", line 
90, in wrapped
value = cache[key] = func(*args, **kwargs)
  File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/networks/views.py",
 line 50, in _get_tenant_list
exceptions.handle(self.request, msg)
  File "/home/jenkins/workspace/gate-horizon-python27/horizon/exceptions.py", 
line 334, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/networks/views.py",
 line 46, in _get_tenant_list
tenants, has_more = api.keystone.tenant_list(self.request)
  File 
"/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/api/keystone.py",
 line 271, in tenant_list
tenants = manager.list(domain=domain, user=user)
  File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/keystoneclient/utils.py",
 line 318, in inner
return func(*args, **kwargs)
  File 
"/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/li

[Yahoo-eng-team] [Bug 1362863] Re: reply queues fill up with unacked messages

2014-08-29 Thread Joe Gordon
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362863

Title:
  reply queues fill up with unacked messages

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Messaging API for OpenStack:
  New

Bug description:
  Since upgrading to icehouse we consistently get reply_x queues
  filling up with unacked messages. To fix this I have to restart the
  service. This seems to happen when something is wrong for a short
  period of time and it doesn't clean up after itself.

  So far I've seen the issue with nova-api, nova-compute, nova-network,
  nova-api-metadata, cinder-api but I'm sure there are others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363224] [NEW] token_flush is failing with recursion depth error

2014-08-29 Thread Brant Knudson
Public bug reported:


When I run `keystone-manage token_flush` on a new devstack install, it fails 
with a recursion depth error:

$ keystone-manage token_flush
2014-08-29 14:21:58.933 CRITICAL keystone [-] RuntimeError: maximum recursion 
depth exceeded while calling a Python object

2014-08-29 14:21:58.933 TRACE keystone Traceback (most recent call last):
2014-08-29 14:21:58.933 TRACE keystone   File "/usr/local/bin/keystone-manage", 
line 6, in 
2014-08-29 14:21:58.933 TRACE keystone exec(compile(open(__file__).read(), 
__file__, 'exec'))
2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/bin/keystone-manage", line 44, in 
2014-08-29 14:21:58.933 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/keystone/cli.py", line 292, in main
2014-08-29 14:21:58.933 TRACE keystone CONF.command.cmd_class.main()
2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/keystone/cli.py", line 176, in main
2014-08-29 14:21:58.933 TRACE keystone 
token_manager.driver.flush_expired_tokens()
2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/keystone/token/persistence/core.py", line 231, in 
__getattr__
2014-08-29 14:21:58.933 TRACE keystone f = 
getattr(self.token_provider_api._persistence, item)
...
2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/keystone/token/persistence/core.py", line 231, in 
__getattr__
2014-08-29 14:21:58.933 TRACE keystone f = 
getattr(self.token_provider_api._persistence, item)
2014-08-29 14:21:58.933 TRACE keystone RuntimeError: maximum recursion depth 
exceeded while calling a Python object
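The repeating frame in the traceback is the classic mutual-delegation trap: a `__getattr__` that forwards missing attributes to an object whose own `__getattr__` forwards them back. A minimal reproduction of the pattern (class and attribute names are simplified stand-ins for the keystone code):

```python
# Two objects that each delegate unknown attributes to the other.
# Looking up any attribute neither actually defines recurses forever.
class Persistence:
    def __getattr__(self, item):
        # Delegates every missing attribute to the manager...
        return getattr(self.manager, item)

class Manager:
    def __getattr__(self, item):
        # ...which delegates back to persistence: mutual recursion.
        return getattr(self.persistence, item)

p, m = Persistence(), Manager()
p.manager, m.persistence = m, p

try:
    p.flush_expired_tokens  # defined on neither object
    hit_limit = False
except RecursionError:  # RuntimeError subclass; plain RuntimeError on py2
    hit_limit = True
```

Note that `__getattr__` is only invoked for attributes that normal lookup fails to find, so the cycle triggers exactly when the requested method is missing on both sides.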

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363224

Title:
  token_flush is failing with recursion depth error

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
  When I run `keystone-manage token_flush` on a new devstack install, it fails 
with a recursion depth error:

  $ keystone-manage token_flush
  2014-08-29 14:21:58.933 CRITICAL keystone [-] RuntimeError: maximum recursion 
depth exceeded while calling a Python object

  2014-08-29 14:21:58.933 TRACE keystone Traceback (most recent call last):
  2014-08-29 14:21:58.933 TRACE keystone   File 
"/usr/local/bin/keystone-manage", line 6, in 
  2014-08-29 14:21:58.933 TRACE keystone 
exec(compile(open(__file__).read(), __file__, 'exec'))
  2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/bin/keystone-manage", line 44, in 
  2014-08-29 14:21:58.933 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/keystone/cli.py", line 292, in main
  2014-08-29 14:21:58.933 TRACE keystone CONF.command.cmd_class.main()
  2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/keystone/cli.py", line 176, in main
  2014-08-29 14:21:58.933 TRACE keystone 
token_manager.driver.flush_expired_tokens()
  2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/keystone/token/persistence/core.py", line 231, in 
__getattr__
  2014-08-29 14:21:58.933 TRACE keystone f = 
getattr(self.token_provider_api._persistence, item)
  ...
  2014-08-29 14:21:58.933 TRACE keystone   File 
"/opt/stack/keystone/keystone/token/persistence/core.py", line 231, in 
__getattr__
  2014-08-29 14:21:58.933 TRACE keystone f = 
getattr(self.token_provider_api._persistence, item)
  2014-08-29 14:21:58.933 TRACE keystone RuntimeError: maximum recursion depth 
exceeded while calling a Python object

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363231] [NEW] Periodic thread lockup

2014-08-29 Thread Brian Elliott
Public bug reported:

The instance locking introduced in
cc5388bbe81aba635fb757e202d860aeed98f3e8 keeps the power state sane
between stop and the periodic task power sync.  However, locking on
an instance in the periodic task thread can potentially lock
that thread for a long time.

Example:
1) User boots an instance.  The instance gets locked by uuid.
2) Driver spawn begins and the image starts downloading from glance.
3) During spawn, periodic tasks run.  Sync power states tries to grab
the same instance lock by uuid.
4) Periodic task thread hangs until the driver spawn completes in
another greenthread.

This scenario results in nova-compute appearing unresponsive for
a long time.
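One way to avoid the periodic thread stalling behind a long spawn is a non-blocking lock acquisition that skips the busy instance and retries on the next period. A sketch with the stdlib `threading` module (nova actually uses greenthread-based synchronization, so treat this as an illustration of the pattern only):

```python
import threading

# Stands in for the per-instance lock taken during spawn/stop.
instance_lock = threading.Lock()

def sync_power_state(uuid):
    # A periodic task should not block behind a long-running spawn;
    # a non-blocking acquire lets it skip this instance and move on.
    if not instance_lock.acquire(blocking=False):
        return "skipped"  # spawn in progress, retry next period
    try:
        return "synced"
    finally:
        instance_lock.release()
```

With this pattern the periodic task stays responsive even while a glance image download holds the instance lock for minutes.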

** Affects: nova
 Importance: Undecided
 Assignee: Brian Elliott (belliott)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363231

Title:
  Periodic thread lockup

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The instance locking introduced in
  cc5388bbe81aba635fb757e202d860aeed98f3e8 keeps the power state sane
  between stop and the periodic task power sync.  However, locking on
  an instance in the periodic task thread can potentially lock
  that thread for a long time.

  Example:
  1) User boots an instance.  The instance gets locked by uuid.
  2) Driver spawn begins and the image starts downloading from glance.
  3) During spawn, periodic tasks run.  Sync power states tries to grab
  the same instance lock by uuid.
  4) Periodic task thread hangs until the driver spawn completes in
  another greenthread.

  This scenario results in nova-compute appearing unresponsive for
  a long time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360483] Re: A VM fails to boot, yet the status is shown as Active and Running

2014-08-29 Thread Joe Gordon
Pawal is correct: if the hypervisor reports 'active', then nova
considers the instance active.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360483

Title:
  A VM fails to boot, yet the status is shown as Active and Running

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  While running the tempest test test_neutron_basic_ops(), I noticed that
  the test was failing. Upon further triage I found that the root cause
  is that the VM fails to boot (the message on the VM's console says
  something like "the kernel is not compatible with the CPU"). However,
  the status of the VM is reported as active and running.

  Even Horizon shows the VM as active and in running state where as the
  VM is not booted.

  This status is incorrect. It should be fixed to report correct status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363250] [NEW] Filter button on System Info page throws 404

2014-08-29 Thread Mohan Seri
Public bug reported:

Navigate to Admin > System > System Info. 
Click the Filter button on any of the tabs. 
Notice that the page is broken.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1363250

Title:
  Filter button on System Info page throws 404

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Navigate to Admin > System > System Info. 
  Click the Filter button on any of the tabs. 
  Notice that the page is broken.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1363250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354354] Re: No network after live-migration

2014-08-29 Thread Joe Gordon
Do you have any more information? The versions used, logs, etc.?

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354354

Title:
  No network after live-migration

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  During live migration, the port update is sent to neutron after plug_vifs is 
executed. In a setup with the neutron ml2 plugin where two nodes require 
different VIF_TYPEs, migrating a VM from one node to the other will result in 
the VM having no network connectivity. 
  VIF bindings should be updated before plug_vifs is called.
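The fix the report asks for is purely an ordering constraint. A toy sketch that records the call order (function names are hypothetical simplifications of the nova/neutron interaction):

```python
# Record the order of operations during a simulated live migration
# to show the required sequencing: the port binding on the destination
# host must be updated before the VIFs are plugged there.
calls = []

def update_port_binding(port_id, host):
    calls.append("update_binding")

def plug_vifs(instance):
    calls.append("plug_vifs")

def live_migrate(instance, port_id, dest_host):
    update_port_binding(port_id, dest_host)  # must come first
    plug_vifs(instance)

live_migrate("vm-1", "port-1", "host-b")
```

If the order is reversed, plug_vifs runs against the stale VIF_TYPE of the source host, which is exactly the connectivity loss described above.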

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1354354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363265] [NEW] Update Cisco CSR routing device driver to use REST API

2014-08-29 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The current device driver uses NETCONF. Update the Cisco CSR1kv routing device 
driver to
use the REST API for configuration, as the other VPN and FW services do.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Update Cisco CSR routing device driver to use REST API
https://bugs.launchpad.net/bugs/1363265
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363265] [NEW] Update Cisco CSR routing device driver to use REST API

2014-08-29 Thread Tom Holtzen
Public bug reported:

The current device driver uses NETCONF. Update the Cisco CSR1kv routing device 
driver to
use the REST API for configuration, as the other VPN and FW services do.

** Affects: neutron
 Importance: Undecided
 Assignee: Tom Holtzen (tholtzen)
 Status: New

** Project changed: barbican => neutron

** Changed in: neutron
 Assignee: (unassigned) => Tom Holtzen (tholtzen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363265

Title:
  Update Cisco CSR routing device driver to use REST API

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The current device driver uses NETCONF. Update the Cisco CSR1kv routing device 
driver to
  use the REST API for configuration, as the other VPN and FW services do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261976] Re: create tenant with no enabled field doesn't set it automatically to True

2014-08-29 Thread Morgan Fainberg
** Changed in: keystone/havana
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1261976

Title:
  create tenant with no enabled field doesn't set it automatically to
  True

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Won't Fix

Bug description:
  If an HTTP request to create a tenant doesn't contain the "enabled"
  field like this:

$ curl -i -X POST http://127.0.0.1:35357/v2.0/tenants -H "User-
  Agent: python-keystoneclient" -H "Content-Type: application/json" -H
  "X-Auth-Token: " -d '{"tenant": {"name": "test"}}'

  Result is this:

  {"tenant": {"description": null, "id":
  "3e82042c15f7423d8e032bad31c6ba5f", "name": "test"}}

  While the expected result is to have enabled by default set to True:

  {"tenant": {"description": null, "id":
  "3e82042c15f7423d8e032bad31c6ba5f", "name": "test", "enabled": true}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1261976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363288] [NEW] I found typo in https://github.com/openstack/keystone/blob/master/keystone/common/controller.py "sane" in place of "same"

2014-08-29 Thread Sarvesh Ranjan
Public bug reported:

https://github.com/openstack/keystone/blob/master/keystone/common/controller.py

I found a typo in the controller.py file: "sane" was written instead of
"same".

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363288

Title:
  I found typo in
  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py
  "sane" in place of "same"

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py

  I found a typo in the controller.py file: "sane" was written instead of
  "same".

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363289] [NEW] Typos in base64utils.py file

2014-08-29 Thread Sarvesh Ranjan
Public bug reported:

In 
https://github.com/openstack/keystone/blob/master/keystone/common/base64utils.py
Typos:
Line No: 143 "enconding" in place of "encoding"
Line No: 296 and 300 "multple" in place of "multiple"
Line No: 313, 350 and 372 "whitepace" in place of "whitespace"

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363289

Title:
  Typos in base64utils.py file

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In 
https://github.com/openstack/keystone/blob/master/keystone/common/base64utils.py
  Typos:
  Line No: 143 "enconding" in place of "encoding"
  Line No: 296 and 300 "multple" in place of "multiple"
  Line No: 313, 350 and 372 "whitepace" in place of "whitespace"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363289/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295876] Re: libvirtError: internal error unable to add domain xxx to cgroup: No space left on device

2014-08-29 Thread Joe Gordon
No hits in a while; it looks like changing the version of libvirt fixed this.

** Changed in: nova
   Status: Confirmed => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1295876

Title:
   libvirtError: internal error unable to add domain xxx to cgroup: No
  space left on device

Status in OpenStack Compute (Nova):
  Invalid
Status in “libvirt” package in Ubuntu:
  Confirmed

Bug description:
  logstash query:   message:"cgroup\: No space left on device" AND
  filename:logs*screen-n-cpu.txt

  http://logs.openstack.org/12/80412/8/check/check-tempest-dsvm-
  postgres-
  full/f9f6158/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-03-21_17_45_12_490

  
  ERROR nova.compute.manager [req-630b71d6-0fbe-4e9e-99fe-019da7d29a3a 
FixedIPsNegativeTestJson-475659359 FixedIPsNegativeTestJson-265680949] 
[instance: 3f281136-ed69-4bfb-bf36-a7d4aa1c0640] Error: internal error unable 
to add domain instance-0002 task 3057 to cgroup: No space left on device
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] Traceback (most recent call last):
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1304, in _build_instance
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] set_access_ip=set_access_ip)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 394, in decorated_function
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] return function(self, context, *args, 
**kwargs)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1716, in _spawn
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] six.reraise(self.type_, self.value, 
self.tb)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1713, in _spawn
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] block_device_info)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2241, in spawn
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] block_device_info)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3621, in 
_create_domain_and_network
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] power_on=power_on)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3531, in _create_domain
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] domain.XMLDesc(0))
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] six.reraise(self.type_, self.value, 
self.tb)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3526, in _create_domain
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] domain.createWithFlags(launch_flags)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in 
proxy_call
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640] rv = execute(f,*args,**kwargs)
  26028 TRACE nova.compute.manager [instance: 
3f281136-ed69-4bfb-bf36-a7d4aa1c0640]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
 

[Yahoo-eng-team] [Bug 1363315] [NEW] Process exited while connecting to monitor

2014-08-29 Thread Joshua Harlow
Public bug reported:

Seeing a gate failure that seems specific to nova rather than tempest or
other projects:

This could be a libvirt memory error or something else (I'm not sure),
but I wanted to file it for the public good...

---
2014-08-29 06:05:08.200 ERROR oslo.messaging._drivers.common 
[req-2ca387e1-72ad-4fca-bdb6-a9977f6469dd ImagesTestXML-1981953716 
ImagesTestXML-147202609] ['Traceback (most recent call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File "/opt/stack/new/nova/nova/exception.py", line 88, in 
wrapped\npayload)\n', '  File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__\n 
   six.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/new/nova/nova/exception.py", line 71, in wrapped\nreturn 
f(self, con
 text, *args, **kw)\n', '  File "/opt/stack/new/nova/nova/compute/manager.py", 
line 296, in decorated_function\npass\n', '  File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__\n 
   six.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 282, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File "/opt/stack/new/nova/nova/compute/manager.py", line 324, in 
decorated_function\nkwargs[\'instance\'], e, sys.exc_info())\n', '  File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__\n 
   six.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 312, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File "/opt/stack/new/nova/nova/compute/manager.py", line 372, in 
decorated_function\ninstance=instance)\n', '  File 
"/opt/stack/new/nova/nova/openstack/common/ex
 cutils.py", line 82, in __exit__\nsix.reraise(self.type_, self.value, 
self.tb)\n', '  File "/opt/stack/new/nova/nova/compute/manager.py", line 362, 
in decorated_function\n*args, **kwargs)\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2886, in 
snapshot_instance\ntask_states.IMAGE_SNAPSHOT)\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2917, in 
_snapshot_instance\nupdate_task_state)\n', '  File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1628, in snapshot\n
new_dom = self._create_domain(domain=virt_dom)\n', '  File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3952, in 
_create_domain\nLOG.error(err)\n', '  File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__\n 
   six.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3943, in 
_create_domain\ndomain.createWithFlags(launch_flags)\n', '  File 
"/usr/lib/python2.7/dist-
 packages/eventlet/tpool.py", line 179, in doit\nresult = 
proxy_call(self._autowrap, f, *args, **kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call\n 
   rv = execute(f,*args,**kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker\n
rv = meth(*args,**kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 896, in createWithFlags\n   
 if ret == -1: raise libvirtError (\'virDomainCreateWithFlags() failed\', 
dom=self)\n', 'libvirtError: internal error: process exited while connecting to 
monitor: \n']
---

Exception @ http://logs.openstack.org/89/116489/10/check/check-tempest-
dsvm-postgres-full/82c6fa0/logs/screen-n-cpu.txt.gz?level=WARNING

Full logs @ http://logs.openstack.org/89/116489/10/check/check-tempest-
dsvm-postgres-full/82c6fa0/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363315

Title:
  Process exited while connecting to monitor

Status in OpenStack Compute (Nova):
  New

Bug description:
  Seeing a gate failure that seems more specific to nova vs tempest or
  other:

  This could be a libvirt memory error or other (I'm not sure) but
  wanted to file it for public good...

  ---
  2014-08-29 06:05:08.200 ERROR oslo.messaging._drivers.common 
[req-2ca387e1-72ad-4fca-bdb6-a9977f6469dd ImagesTestXML-1981953716 
ImagesTestXML-147202609] ['Traceback (most recent call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply\nincoming.m

[Yahoo-eng-team] [Bug 1356051] Re: Cannot load 'instance' in the base class - problem in floating-ip-list

2014-08-29 Thread Gloria Gu
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356051

Title:
  Cannot load 'instance' in the base class - problem in floating-ip-list

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  I tried the following on VMware using the VMwareVCDriver with nova-
  network:

  1. Create an instance

  2. Create a floating IP: $ nova floating-ip-create

  3. Associate a floating IP with the instance: $ nova floating-ip-
  associate test1 10.131.254.249

  4. Attempt a list of the floating IPs:
  $ nova floating-ip-list
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-dcb17077-c670-4e2a-8a34-715a8afc5f33)

  
  It failed and printed out the following messages in n-api logs:

  2014-08-12 13:54:29.578 ERROR nova.api.openstack 
[req-86d8f466-cfae-42ac-8340-9eac36d6fc71 demo demo] Caught error: Cannot load 
'instance' in the base class
  2014-08-12 13:54:29.578 TRACE nova.api.openstack Traceback (most recent call 
last):
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/__init__.py", line 124, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-08-12 13:54:29.578 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-08-12 13:54:29.578 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
565, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return self._app(env, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 908, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack content_type, body, 
accept)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 974, in _process_stack
  2014-08-12 13:54:29.578 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 1058, in dispatch
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ips.py", line 146, 
in index
  2014-08-12 13:54:29.578 TRACE nova.api.openstack 
self._normalize_ip(floating_ip)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ips.py", line 117, 
in _normalize_ip
  2014-08-12 13:54:29.578 TRACE nova.api.openstack floating_ip['instance'] 
= fixed_ip['inst

[Yahoo-eng-team] [Bug 1310896] Re: "EC2ResponseError: EC2ResponseError: 400 Bad Request" in tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_integration_1

2014-08-29 Thread Joe Gordon
It appears this is due to tempest having bad cleanup logic after a test
fails. So it looks like this is a tempest issue.

** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310896

Title:
  "EC2ResponseError: EC2ResponseError: 400 Bad Request" in
  
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_integration_1

Status in Tempest:
  New

Bug description:
  2014-04-22 00:33:21.197 | 2014-04-22 00:31:41,668 400 Bad Request
  2014-04-22 00:33:21.197 | 2014-04-22 00:31:41,669 
  2014-04-22 00:33:21.197 | IncorrectState: Volume 7ec87937-4a0d-4548-94a1-e85d26b68c08 is not attached to anything (req-8d225660-d8d6-43b8-958e-bf85cd2f4e09)
  2014-04-22 00:33:21.197 | }}}
  2014-04-22 00:33:21.197 | 
  2014-04-22 00:33:21.197 | Traceback (most recent call last):
  2014-04-22 00:33:21.197 |   File 
"tempest/thirdparty/boto/test_ec2_instance_run.py", line 310, in 
test_integration_1
  2014-04-22 00:33:21.197 | volume.detach()
  2014-04-22 00:33:21.197 |   File 
"/usr/local/lib/python2.7/dist-packages/boto/ec2/volume.py", line 183, in detach
  2014-04-22 00:33:21.198 | dry_run=dry_run
  2014-04-22 00:33:21.198 |   File 
"/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 2357, in 
detach_volume
  2014-04-22 00:33:21.198 | return self.get_status('DetachVolume', params, 
verb='POST')
  2014-04-22 00:33:21.198 |   File 
"/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1196, in 
get_status
  2014-04-22 00:33:21.198 | raise self.ResponseError(response.status, 
response.reason, body)
  2014-04-22 00:33:21.198 | EC2ResponseError: EC2ResponseError: 400 Bad Request
  2014-04-22 00:33:21.198 | 
  2014-04-22 00:33:21.198 | IncorrectState: Volume 7ec87937-4a0d-4548-94a1-e85d26b68c08 is not attached to anything (req-8d225660-d8d6-43b8-958e-bf85cd2f4e09)

  
  
http://logs.openstack.org/24/88224/2/check/check-tempest-dsvm-neutron/d060a98/console.html#_2014-04-22_00_33_21_197

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkVDMlJlc3BvbnNlRXJyb3I6IEVDMlJlc3BvbnNlRXJyb3I6IDQwMCBCYWQgUmVxdWVzdFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTgxMzgwODcxMDZ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1310896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356051] Re: Cannot load 'instance' in the base class - problem in floating-ip-list

2014-08-29 Thread Gloria Gu
We have a bug in horizon: https://bugs.launchpad.net/horizon/+bug/1361708

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356051

Title:
  Cannot load 'instance' in the base class - problem in floating-ip-list

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  I tried the following on VMware using the VMwareVCDriver with nova-
  network:

  1. Create an instance

  2. Create a floating IP: $ nova floating-ip-create

  3. Associate a floating IP with the instance: $ nova floating-ip-
  associate test1 10.131.254.249

  4. Attempt a list of the floating IPs:
  $ nova floating-ip-list
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-dcb17077-c670-4e2a-8a34-715a8afc5f33)

  
  It failed and printed out the following messages in n-api logs:

  2014-08-12 13:54:29.578 ERROR nova.api.openstack 
[req-86d8f466-cfae-42ac-8340-9eac36d6fc71 demo demo] Caught error: Cannot load 
'instance' in the base class
  2014-08-12 13:54:29.578 TRACE nova.api.openstack Traceback (most recent call 
last):
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/__init__.py", line 124, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-08-12 13:54:29.578 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-08-12 13:54:29.578 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
565, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return self._app(env, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 908, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack content_type, body, 
accept)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 974, in _process_stack
  2014-08-12 13:54:29.578 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 1058, in dispatch
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ips.py", line 146, 
in index
  2014-08-12 13:54:29.578 TRACE nova.api.openstack 
self._normalize_ip(floating_ip)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ips.py", line 117, 
in _normalize_ip
  2014-08-12 13:54:29.578 TRACE nova.api.openstack floating

[Yahoo-eng-team] [Bug 1363316] [NEW] Spelling correction

2014-08-29 Thread Sarvesh Ranjan
Public bug reported:

Typos:

Line No : 143 "enconding" in place of "encoding" 
Line No : 296 and 300 "multple" in place of "multiple" 
Line No :313, 350 and 372 "whitepace" in place of "whitespace"

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363316

Title:
  Spelling correction

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Typos:

  Line No : 143 "enconding" in place of "encoding" 
  Line No : 296 and 300 "multple" in place of "multiple" 
  Line No :313, 350 and 372 "whitepace" in place of "whitespace"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363316/+subscriptions



[Yahoo-eng-team] [Bug 1339879] Re: gate tests fail due to intermittent failures

2014-08-29 Thread Joe Gordon
This case is a glance issue: http://logs.openstack.org/51/105751/1/check
/check-tempest-dsvm-
full/5f646ca/logs/screen-g-api.txt.gz?#_2014-07-09_16_10_48_838

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339879

Title:
  gate tests fail due to intermittent failures

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Tests in the Glance gate seem to be failing due to the following
  error:-

  Details: The server has either erred or is incapable of performing the
  requested operation.

  2014-07-09 16:38:38.863 | ==
  2014-07-09 16:38:38.863 | Failed 1 tests - output below:
  2014-07-09 16:38:38.863 | ==
  2014-07-09 16:38:38.863 | 
  2014-07-09 16:38:38.863 | 
tempest.api.compute.images.test_images.ImagesTestXML.test_delete_saving_image[gate]
  2014-07-09 16:38:38.864 | 
---
  2014-07-09 16:38:38.864 | 
  2014-07-09 16:38:38.864 | Captured traceback:
  2014-07-09 16:38:38.864 | ~~~
  2014-07-09 16:38:38.864 | Traceback (most recent call last):
  2014-07-09 16:38:38.864 |   File 
"tempest/api/compute/images/test_images.py", line 42, in 
test_delete_saving_image
  2014-07-09 16:38:38.864 | resp, body = 
self.client.delete_image(image['id'])
  2014-07-09 16:38:38.864 |   File 
"tempest/services/compute/xml/images_client.py", line 136, in delete_image
  2014-07-09 16:38:38.864 | return self.delete("images/%s" % 
str(image_id))
  2014-07-09 16:38:38.864 |   File "tempest/common/rest_client.py", line 
224, in delete
  2014-07-09 16:38:38.864 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2014-07-09 16:38:38.865 |   File "tempest/common/rest_client.py", line 
430, in request
  2014-07-09 16:38:38.865 | resp, resp_body)
  2014-07-09 16:38:38.865 |   File "tempest/common/rest_client.py", line 
526, in _error_checker
  2014-07-09 16:38:38.865 | raise exceptions.ServerFault(message)
  2014-07-09 16:38:38.865 | ServerFault: Got server fault
  2014-07-09 16:38:38.865 | Details: The server has either erred or is 
incapable of performing the requested operation.
  2014-07-09 16:38:38.865 | 


  ref. http://logs.openstack.org/51/105751/1/check/check-tempest-dsvm-
  full/5f646ca/console.html#_2014-07-09_16_38_38_863

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339879/+subscriptions



[Yahoo-eng-team] [Bug 1363319] [NEW] Fixed Typo

2014-08-29 Thread Sarvesh Ranjan
Public bug reported:

Fixed Typo 
"cacheing" changed to "caching"

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363319

Title:
  Fixed Typo

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Fixed Typo 
  "cacheing" changed to "caching"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363319/+subscriptions



[Yahoo-eng-team] [Bug 1363103] Re: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

2014-08-29 Thread Joe Gordon
2014-08-29 13:03:59.833 740 TRACE oslo.messaging.rpc.dispatcher raise 
operational_error
2014-08-29 13:03:59.833 740 TRACE oslo.messaging.rpc.dispatcher 
OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded; try 
restarting transaction') 'INSERT INTO routerl3agentbindings (router_id, 
l3_agent_id) VALUES (%s, %s)' ('1f24da95-56d3-4393-9931-7121e484206c', 
'b7207ce5-504d-47dd-be16-0ccef99c37ef')
2014-08-29 13:03:59.833 740 TRACE oslo.messaging.rpc.dispatcher 

** Changed in: nova
   Status: New => Invalid

** Summary changed:

- The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) 
+ OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded; try 
restarting transaction') 'INSERT INTO routerl3agentbindings

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363103

Title:
  OperationalError: (OperationalError) (1205, 'Lock wait timeout
  exceeded; try restarting transaction') 'INSERT INTO
  routerl3agentbindings

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Gate jobs failed on 'gate-tempest-dsvm-neutron-full' with following
  error.

  ClientException: The server has either erred or is incapable of
  performing the requested operation. (HTTP 500) (Request-ID: req-
  7d7ab999-1351-43be-bd51-96a100a7cdeb)

  Detailed stack trace is:

  RESP BODY: {"itemNotFound": {"message": "Instance could not be found", 
"code": 404}}
  }}}

  Traceback (most recent call last):
  File "tempest/scenario/test_network_advanced_server_ops.py", line 73, in setUp
  create_kwargs=create_kwargs)
  File "tempest/scenario/manager.py", line 778, in create_server
  self.status_timeout(client.servers, server.id, 'ACTIVE')
  File "tempest/scenario/manager.py", line 572, in status_timeout
  not_found_exception=not_found_exception)
  File "tempest/scenario/manager.py", line 635, in _status_timeout
  CONF.compute.build_interval):
  File "tempest/test.py", line 614, in call_until_true
  if func():
  File "tempest/scenario/manager.py", line 606, in check_status
  thing = things.get(thing_id)
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 555, 
in get
  return self._get("/servers/%s" % base.getid(server), "server")
  File "/opt/stack/new/python-novaclient/novaclient/base.py", line 93, in _get
  _resp, body = self.api.client.get(url)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 487, in get
  return self._cs_request(url, 'GET', **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 465, in 
_cs_request
  resp, body = self._time_request(url, method, **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 439, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 433, in 
request
  raise exceptions.from_response(resp, body, url, method)
   ClientException: The server has either erred or is incapable of performing 
the requested operation. (HTTP 500) (Request-ID: 
req-7d7ab999-1351-43be-bd51-96a100a7cdeb)

  Traceback (most recent call last):
  StringException: Empty attachments:
  stderr
  stdout

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363103/+subscriptions



[Yahoo-eng-team] [Bug 1363103] Re: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

2014-08-29 Thread Joe Gordon
It looks like this is from neutron-server hanging:
http://logs.openstack.org/37/98737/18/gate/gate-tempest-dsvm-neutron-
full/1adef2a/logs/screen-n-api.txt.gz?level=INFO#_2014-08-29_13_03_38_109

http://logs.openstack.org/37/98737/18/gate/gate-tempest-dsvm-neutron-
full/1adef2a/logs/screen-q-svc.txt.gz?level=TRACE#_2014-08-29_13_03_59_833

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363103

Title:
  OperationalError: (OperationalError) (1205, 'Lock wait timeout
  exceeded; try restarting transaction') 'INSERT INTO
  routerl3agentbindings

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Gate jobs failed on 'gate-tempest-dsvm-neutron-full' with following
  error.

  ClientException: The server has either erred or is incapable of
  performing the requested operation. (HTTP 500) (Request-ID: req-
  7d7ab999-1351-43be-bd51-96a100a7cdeb)

  Detailed stack trace is:

  RESP BODY: {"itemNotFound": {"message": "Instance could not be found", 
"code": 404}}
  }}}

  Traceback (most recent call last):
  File "tempest/scenario/test_network_advanced_server_ops.py", line 73, in setUp
  create_kwargs=create_kwargs)
  File "tempest/scenario/manager.py", line 778, in create_server
  self.status_timeout(client.servers, server.id, 'ACTIVE')
  File "tempest/scenario/manager.py", line 572, in status_timeout
  not_found_exception=not_found_exception)
  File "tempest/scenario/manager.py", line 635, in _status_timeout
  CONF.compute.build_interval):
  File "tempest/test.py", line 614, in call_until_true
  if func():
  File "tempest/scenario/manager.py", line 606, in check_status
  thing = things.get(thing_id)
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 555, 
in get
  return self._get("/servers/%s" % base.getid(server), "server")
  File "/opt/stack/new/python-novaclient/novaclient/base.py", line 93, in _get
  _resp, body = self.api.client.get(url)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 487, in get
  return self._cs_request(url, 'GET', **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 465, in 
_cs_request
  resp, body = self._time_request(url, method, **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 439, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 433, in 
request
  raise exceptions.from_response(resp, body, url, method)
   ClientException: The server has either erred or is incapable of performing 
the requested operation. (HTTP 500) (Request-ID: 
req-7d7ab999-1351-43be-bd51-96a100a7cdeb)

  Traceback (most recent call last):
  StringException: Empty attachments:
  stderr
  stdout

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363103/+subscriptions



[Yahoo-eng-team] [Bug 1363324] [NEW] a bug in quota check

2014-08-29 Thread HenryShen
Public bug reported:

\nova\db\sqlalchemy\api.py   quota_reserve()

When deciding whether to refresh user_usages[resource], one rule is that
if the last refresh happened too long ago, user_usages[resource] must be
refreshed:

 elif max_age and (user_usages[resource].updated_at -
  timeutils.utcnow()).seconds >= max_age:

Subtracting the current time from the last update time yields a negative
timedelta, and the .seconds attribute of a negative timedelta is a large
positive number, so the refresh is always executed (assuming max_age is
not set to a huge value).
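
A minimal, self-contained sketch of the timedelta pitfall described above
(not the nova code itself; the variable names are illustrative):

```python
from datetime import datetime, timedelta

# Python normalizes a timedelta into (days, seconds, microseconds) with
# 0 <= seconds < 86400, so a negative delta has days == -1 and a large
# positive .seconds -- exactly the overflow described in this report.
updated_at = datetime.utcnow()
now = updated_at + timedelta(seconds=5)   # pretend 5 seconds have passed

delta = updated_at - now                  # -5 seconds overall
print(delta.days, delta.seconds)          # -1 86395

# (now - updated_at).total_seconds() gives the intended age of the usage
# record, which would make the max_age comparison behave as expected.
print((now - updated_at).total_seconds()) # 5.0
```

Comparing 86395 against any reasonable max_age is always true, which is
why the refresh runs on every call.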

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363324

Title:
  a bug in quota check

Status in OpenStack Compute (Nova):
  New

Bug description:
  \nova\db\sqlalchemy\api.py   quota_reserve()

  When deciding whether to refresh user_usages[resource], one rule is that
  if the last refresh happened too long ago, user_usages[resource] must be
  refreshed:

   elif max_age and (user_usages[resource].updated_at -
timeutils.utcnow()).seconds >= max_age:

  Subtracting the current time from the last update time yields a negative
  timedelta, and the .seconds attribute of a negative timedelta is a large
  positive number, so the refresh is always executed (assuming max_age is
  not set to a huge value).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363324/+subscriptions



[Yahoo-eng-team] [Bug 1363326] [NEW] Error retries in _allocate_network

2014-08-29 Thread AiQingxing
Public bug reported:

In /nova/compute/manager.py/def _allocate_network_async,line 1559.

attempts = retries > 1 and retries + 1 or 1
retry_time = 1
for attempt in range(1, attempts + 1):

The variable attempts is meant to hold the number of network-allocation
attempts, but the expression gets it slightly wrong.
See the simulated results below:
retries=0, attempts=1
retries=1, attempts=1
retries=2, attempts=3
When retries=1, attempts=1, so no retry actually happens.
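
A small, hypothetical reproduction of the quoted expression (the helper
names are illustrative, not from nova):

```python
# The and/or idiom from the report: `retries > 1 and retries + 1 or 1`.
# For retries == 1 the left side is False, so the whole expression falls
# through to 1 -- a single attempt, i.e. no retry at all.
def attempts_for(retries):
    return retries > 1 and retries + 1 or 1

# One possible fix that honors a single retry:
def fixed_attempts_for(retries):
    return retries + 1 if retries > 0 else 1

for retries in (0, 1, 2):
    print(retries, attempts_for(retries), fixed_attempts_for(retries))
# 0 1 1
# 1 1 2
# 2 3 3
```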

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363326

Title:
  Error retries in _allocate_network

Status in OpenStack Compute (Nova):
  New

Bug description:
  In /nova/compute/manager.py/def _allocate_network_async,line 1559.

  attempts = retries > 1 and retries + 1 or 1
  retry_time = 1
  for attempt in range(1, attempts + 1):

  The variable attempts is meant to hold the number of network-allocation
  attempts, but the expression gets it slightly wrong.
  See the simulated results below:
  retries=0, attempts=1
  retries=1, attempts=1
  retries=2, attempts=3
  When retries=1, attempts=1, so no retry actually happens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363326/+subscriptions



[Yahoo-eng-team] [Bug 1357677] Re: Instances failes to boot from volume

2014-08-29 Thread John Griffith
I'm not sure why this is logged as a Cinder bug?  Other than the fact
that it's boot from volume perhaps, but the instance appears to boot
correctly and is in ACTIVE state.  The issue here seems to be networking
as the ssh connection fails...  no?

http://logs.openstack.org/75/113175/1/gate/check-tempest-dsvm-neutron-
full/827c854/console.html.gz#_2014-08-14_11_23_29_423

Not sure if this is on the Neutron side or the Nova side, but I suspect
it's a networking issue, regardless doesn't seem like a Cinder issue as
far as I can tell.

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357677

Title:
  Instances failes to boot from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Logstash query for full console outputs which do not contain 'info:
  initramfs loading root from /dev/vda', but do contain the preceding boot
  message.

  These failures look like an ssh connectivity issue, but the instance is
  not booted, and it happens regardless of the network type.

  message: "Freeing unused kernel memory" AND message: "Initializing
  cgroup subsys cpuset" AND NOT message: "initramfs loading root from"
  AND tags:"console"

  49 incident/week.

  Example console log:
  
http://logs.openstack.org/75/113175/1/gate/check-tempest-dsvm-neutron-full/827c854/console.html.gz#_2014-08-14_11_23_30_120

  It failed when it tried to ssh to the third server.
  WARNING: The console.log contains the serial console output of two
  instances; try not to mix them up when reading.

  The fail point in the test code was here:
  
https://github.com/openstack/tempest/blob/b7144eb08175d010e1300e14f4f75d04d9c63c98/tempest/scenario/test_volume_boot_pattern.py#L175

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357677/+subscriptions



[Yahoo-eng-team] [Bug 1363332] [NEW] Database migration downgrade to havana fails

2014-08-29 Thread Henry Gessau
Public bug reported:

The first migration script after havana (e197124d4b9) has a bad
downgrade.

INFO  [alembic.migration] Running downgrade e197124d4b9 -> havana, add unique 
constraint to members
Traceback (most recent call last):
  File "/home/henry/Dev/neutron/.tox/py27/bin/neutron-db-manage", line 10, in 

sys.exit(main())
  File "/home/henry/Dev/neutron/neutron/db/migration/cli.py", line 175, in main
CONF.command.func(config, CONF.command.name)
  File "/home/henry/Dev/neutron/neutron/db/migration/cli.py", line 85, in 
do_upgrade_downgrade
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/home/henry/Dev/neutron/neutron/db/migration/cli.py", line 63, in 
do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/command.py",
 line 151, in downgrade
script.run_env()
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/script.py",
 line 203, in run_env
util.load_python_file(self.dir, 'env.py')
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/util.py",
 line 215, in load_python_file
module = load_module_py(module_id, path)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/compat.py",
 line 58, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/home/henry/Dev/neutron/neutron/db/migration/alembic_migrations/env.py", line 
120, in 
run_migrations_online()
  File 
"/home/henry/Dev/neutron/neutron/db/migration/alembic_migrations/env.py", line 
108, in run_migrations_online
options=build_options())
  File "", line 7, in run_migrations
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/environment.py",
 line 689, in run_migrations
self.get_context().run_migrations(**kw)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/migration.py",
 line 263, in run_migrations
change(**kw)
  File 
"/home/henry/Dev/neutron/neutron/db/migration/alembic_migrations/versions/e197124d4b9_add_unique_constrain.py",
 line 62, in downgrade
type_='unique'
  File "", line 7, in drop_constraint
  File "", line 1, in 
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/util.py",
 line 332, in go
return fn(*arg, **kw)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/operations.py",
 line 841, in drop_constraint
self.impl.drop_constraint(const)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 138, in drop_constraint
self._exec(schema.DropConstraint(const))
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 76, in _exec
conn.execute(construct, *multiparams, **params)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 729, in execute
return meth(self, multiparams, params)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py",
 line 69, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 783, in _execute_ddl
compiled
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 958, in _execute_context
context)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1156, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 951, in _execute_context
context)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 436, in do_execute
cursor.execute(statement, parameters)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/MySQLdb/cursors.py",
 line 205, in execute
self.errorhandler(self, exc, value)
  File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/MySQLdb/connections.py",
 line 36, in defaulterrorhandler
raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1553, "Cannot drop index 
'uniq_member0pool_id0address0port': needed in a foreign key constraint") 'ALTER 
TABLE members DROP INDEX uniq_member0pool_id0address0port' ()

** Affects: neutron
 Importance: Undecided
 Assignee: Henry Gessau (gessau)
 Status: New


** Tags: db

** Changed in: neutron
 A

[Yahoo-eng-team] [Bug 1322579] Re: dnsmasq doesn't receive dhcp packets

2014-08-29 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1322579

Title:
  dnsmasq doesn't receive dhcp packets

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_hotplug_nic 
failed while testing the connection to the instance. From the logs it seems 
the instance didn't get an IP address because the DHCPDISCOVER never reached 
dnsmasq.
  The test failed before any test-specific actions, so this issue does not 
seem related to the test case.

  This is the only message in syslog about the requested IP address:
  May 22 22:19:56 devstack-precise-hpcloud-region-b-4249468 dnsmasq-dhcp[7894]: 
DHCPRELEASE(tapc878ddbf-4d) 10.100.0.18 fa:16:3e:ae:c7:2e unknown lease

  http://logs.openstack.org/89/70689/34/gate/gate-tempest-dsvm-
  neutron/f841e50/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1322579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1197367] Re: 020_migrate_metadata_table_roles fails with "ValueError: Expecting property name"

2014-08-29 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1197367

Title:
  020_migrate_metadata_table_roles fails with "ValueError: Expecting
  property name"

Status in OpenStack Identity (Keystone):
  Expired

Bug description:
  Hello All,

  After apt-get update && apt-get upgrade, keystone failed:

  Setting up keystone (1:2013.1.2-0ubuntu2~cloud0) ...
  Installing new version of config file /etc/logrotate.d/keystone ...
  Installing new version of config file /etc/init/keystone.conf ...

  Configuration file `/etc/keystone/keystone.conf'
   ==> Modified (by you or by a script) since installation.
   ==> Package distributor has shipped an updated version.
     What would you like to do about it ?  Your options are:
  Y or I  : install the package maintainer's version
  N or O  : keep your currently-installed version
    D : show the differences between the versions
    Z : start a shell to examine the situation
   The default action is to keep your current version.
  *** keystone.conf (Y/I/N/O/D/Z) [default=N] ?
  Installing new version of config file /etc/keystone/policy.json ...
  Traceback (most recent call last):
    File "/usr/bin/keystone-manage", line 28, in 
  cli.main(argv=sys.argv, config_files=config_files)
    File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 175, in main
  CONF.command.cmd_class.main()
    File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 54, in main
  driver.db_sync()
    File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/sql.py", 
line 156, in db_sync
  migration.db_sync()
    File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration.py", 
line 52, in db_sync
  return versioning_api.upgrade(CONF.sql.connection, repo_path, version)
    File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 
186, in upgrade
  return _migrate(url, repository, version, upgrade=True, err=err, **opts)
    File "", line 2, in _migrate
    File 
"/usr/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", line 
159, in with_engine
  return f(*a, **kw)
    File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 
366, in _migrate
  schema.runchange(ver, change, changeset.step)
    File "/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 
91, in runchange
  change.run(self.engine, step)
    File "/usr/lib/python2.7/dist-packages/migrate/versioning/script/py.py", 
line 145, in run
  script_func(engine)
    File 
"/usr/lib/python2.7/dist-packages/keystone/common/sql/migrate_repo/versions/020_migrate_metadata_table_roles.py",
 line 30, in upgrade
  data = json.loads(metadata.data)
    File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
  return _default_decoder.decode(s)
    File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
  obj, end = self.scan_once(s, idx)
  ValueError: Expecting property name: line 1 column 1 (char 1)
  dpkg: error processing keystone (--configure):
   subprocess installed post-installation script returned error exit status 1

  I use a MySQL database and I log in to the db as the root user, so
  there are no auth issues.
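  The ValueError above ("Expecting property name" at column 1) is exactly
what json.loads raises on repr()-style, single-quoted pseudo-JSON, one
plausible shape for old rows in the metadata table. A defensive parsing
sketch (load_role_metadata is a hypothetical helper, not actual Keystone
code) that the migration could use instead of a bare json.loads:

```python
import ast
import json


def load_role_metadata(raw):
    """Parse a serialized metadata column defensively: accept real JSON,
    empty values, and Python-repr dicts with single quotes."""
    if not raw or not raw.strip():
        return {}
    try:
        return json.loads(raw)
    except ValueError:
        # Fallback for rows written as Python literals rather than JSON;
        # single quotes trip json.loads at line 1, column 1.
        return ast.literal_eval(raw)


print(load_role_metadata('{"roles": ["admin"]}'))  # {'roles': ['admin']}
print(load_role_metadata("{'roles': ['admin']}"))  # {'roles': ['admin']}
print(load_role_metadata(""))                      # {}
```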

  My keystone does not work now:

  (sqlalchemy.engine.base.Engine): 2013-07-03 14:22:03,260 INFO ROLLBACK
  (root): 2013-07-03 14:22:03,260 ERROR (OperationalError) (1054, "Unknown 
column 'token.user_id' in 'field list'") 'SELECT token.id AS token_id, 
token.expires AS token_expires, token.extra AS token_extra, token.valid AS 
token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id 
\nFROM token \nWHERE token.valid = %s AND token.id = %s \n LIMIT %s' (1, 
'64814d7d4765b6bfc47cabb81fa36974', 1)
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 236, 
in __call__
  result = method(context, **params)
    File "/usr/lib/python2.7/dist-packages/keystone/token/controllers.py", line 
142, in authenticate
  token_id=token_id)
    File "/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 
47, in _wrapper
  return f(*args, **kw)
    File "/usr/lib/python2.7/dist-packages/keystone/token/backends/sql.py", 
line 46, in get_token
  token_ref = query.first()
    File 
"/usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.7.10-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py",
 line 2156, in first
  ret = list(self[0:1])
    File 
"/usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.7.10-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py",
 line 2023, in __getitem__
  return list(res)
    File 
"/usr/local/lib/python2.7/

[Yahoo-eng-team] [Bug 1363349] [NEW] VMware: test_driver_api...test_snapshot_delete_vm_snapshot* need to be fixed

2014-08-29 Thread Vui Lam
Public bug reported:

Once converted to use oslo.vmware, the test cases
test_driver_api.VMwareAPIVMTestCase.test_snapshot_delete_vm_snapshot*
fail:
http://logs.openstack.org/75/70175/43/check/gate-nova-python27/c714fde/console.html

This is most likely an unintended consequence of mocking time.sleep.
These tests are currently proposed to be skipped, but we should look to
provide a fix for the test cases as soon as possible.

A separate patch was posted to demonstrate the potential cause. See the
lone diff between patch set 1 (which fails the above-mentioned tests) and
patch set 3 (which doesn't):
https://review.openstack.org/#/c/117897/1..3/nova/tests/virt/vmwareapi/test_driver_api.py
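To illustrate how a broad time.sleep mock can change test behaviour, here is
a sketch with a loopingcall-style retry helper standing in for the
oslo.vmware polling these tests exercise (poll is a hypothetical stand-in,
not the actual nova or oslo.vmware code):

```python
import time
from unittest import mock


def poll(check, attempts=3, interval=10.0):
    """Retry check() with a fixed pause between tries, loopingcall-style."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(interval)
    return False


# Patching time.sleep module-wide silences *every* caller, not just the
# code under test: the call below would otherwise block ~30 seconds, but
# with the patch it returns immediately after three no-op sleeps, so any
# assertion that depended on real elapsed time now sees different state.
with mock.patch("time.sleep") as fake_sleep:
    result = poll(lambda: False)

print(result, fake_sleep.call_count)  # False 3
```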

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363349

Title:
  VMware: test_driver_api...test_snapshot_delete_vm_snapshot* need to be
  fixed

Status in OpenStack Compute (Nova):
  New

Bug description:
  Once converted to use oslo.vmware, the test cases
test_driver_api.VMwareAPIVMTestCase.test_snapshot_delete_vm_snapshot*
fail:
  
http://logs.openstack.org/75/70175/43/check/gate-nova-python27/c714fde/console.html

  This is most likely an unintended consequence of mocking time.sleep.
  These tests are currently proposed to be skipped, but we should look to
provide a fix for the test cases as soon as possible.

  A separate patch was posted to demonstrate the potential cause. See the
lone diff between patch set 1 (which fails the above-mentioned tests) and
patch set 3 (which doesn't):
  
https://review.openstack.org/#/c/117897/1..3/nova/tests/virt/vmwareapi/test_driver_api.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363199] Re: Update rpc version in Arista Plugin

2014-08-29 Thread Sukhdev Kapur
Please ignore this. I just noticed that Akihiro has already updated the
code. So, no action is needed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363199

Title:
  Update rpc version in Arista Plugin

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  RPC version in L3 router service plugin and L3 agent has been upgraded
  to 1.3. Update the version in Arista service plugin from 1.2 to 1.3 to
  be compatible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp