[Yahoo-eng-team] [Bug 1446421] [NEW] Increasing gap between form-groups

2015-04-20 Thread Shaoquan Chen
Public bug reported:

In the new Launch Instance wizard, the vertical margin between .form-group
elements is zero, causing form fields to sit too close together.

Steps to reproduce:

- Open the new Launch Instance wizard.
- Switch to the Key Pair step.
- Click on the `Import Key Pair` button; the Public Key field is too
close to the Key Pair Name field.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1446421

Title:
  Increasing gap between form-groups

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the new Launch Instance wizard, the vertical margin between .form-group
  elements is zero, causing form fields to sit too close together.

  Steps to reproduce:

  - Open the new Launch Instance wizard.
  - Switch to the Key Pair step.
  - Click on the `Import Key Pair` button; the Public Key field is too
close to the Key Pair Name field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1446421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298355] Re: Lock wait timeout in update VIP status

2015-04-20 Thread Joe Gordon
The expiration timer hasn't started for some reason.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298355

Title:
  Lock wait timeout in update VIP status

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Please note that this bug is similar to bug 1253822. Possibly included
  in it (but that one is now closed), but surely not a duplicate.

  The root cause is a lock wait timeout occurring while updating VIPs.
  Given the nature of load balancing tests, this is non-critical in 90% of
cases, meaning the job succeeds anyway.

  But the job succeeding most of the time does not mean this is not a bug.

  Occurrences in the past 7 days: 141 (15 failures)

  logstash queries:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJ0YWdzOlwic2NyZWVuLXEtc3ZjLnR4dFwiIEFORCBtZXNzYWdlOlwiTG9jayB3YWl0IHRpbWVvdXQgZXhjZWVkZWRcIiBBTkQgbWVzc2FnZTpcIlVQREFURSB2aXBzIFNFVCBzdGF0dXNcIiBBTkQgbWVzc2FnZTpcIlJldHVybmluZyBleGNlcHRpb24gKE9wZXJhdGlvbmFsRXJyb3IpXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTU5MjQxMDg0NTksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446428] [NEW] Python-neutronclient command extension does not support parent_id in neutronV20.find_resourceid_by_name_or_id()

2015-04-20 Thread Sayaji Patil
Public bug reported:

neutronV20.find_resourceid_by_name_or_id() works when used without the parent_id
parameter, but when we pass parent_id to the method it fails
with this error:

_fx() takes exactly 0 arguments (3 given)
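
A minimal sketch of the two call styles described above, assuming a reachable
neutron endpoint; the credentials, the child resource name, and the parent_id
keyword are placeholders based on this report, not verified against the client
code:

    from neutronclient.neutron import v2_0 as neutronV20
    from neutronclient.v2_0 import client

    # Hypothetical client setup; endpoint and credentials are placeholders.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # Works: look up a resource purely by name or ID.
    net_id = neutronV20.find_resourceid_by_name_or_id(
        neutron, 'network', 'private')

    # Fails per this report: passing parent_id for a child resource raises
    # "_fx() takes exactly 0 arguments (3 given)".
    child_id = neutronV20.find_resourceid_by_name_or_id(
        neutron, 'some_child_resource', 'child-name', parent_id=net_id)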

** Affects: python-neutronclient
 Importance: Undecided
 Assignee: Sayaji Patil (sayaji15)
 Status: New

** Project changed: horizon => python-neutronclient

** Changed in: python-neutronclient
 Assignee: (unassigned) => Sayaji Patil (sayaji15)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1446428

Title:
  Python-neutronclient command extension does not support  parent_id in
  neutronV20.find_resourceid_by_name_or_id()

Status in Python client library for Neutron:
  New

Bug description:
  neutronV20.find_resourceid_by_name_or_id() works when used without the parent_id
  parameter, but when we pass parent_id to the method it fails
  with this error:

  _fx() takes exactly 0 arguments (3 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1446428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446405] [NEW] Test discovery is broken for the api and functional paths

2015-04-20 Thread Maru Newby
Public bug reported:

The following failures in test discovery were noted:

https://review.openstack.org/#/c/169962/
https://bugs.launchpad.net/neutron/+bug/1443480

It was eventually determined that the use of the unittest discovery
mechanism to perform manual discovery in package init for the api and
functional subtrees was to blame.
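
For context, a minimal sketch of the kind of manual-discovery hook the report
points to; the directory layout and pattern are illustrative, not the actual
neutron code:

    # Hypothetical neutron/tests/functional/__init__.py-style hook: delegating
    # to unittest's own discovery from inside a package init is the pattern
    # blamed above, because it competes with the outer runner's discovery.
    import os
    import unittest


    def load_tests(loader, tests, pattern):
        this_dir = os.path.dirname(__file__)
        package_tests = loader.discover(start_dir=this_dir,
                                        pattern=pattern or 'test_*.py')
        tests.addTests(package_tests)
        return tests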

** Affects: neutron
 Importance: High
 Assignee: Maru Newby (maru)
 Status: In Progress


** Tags: kilo-backport-potential

** Tags added: kilo-backport-potential

** Changed in: neutron
 Assignee: (unassigned) => Maru Newby (maru)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446405

Title:
  Test discovery is broken for the api and functional paths

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The following failures in test discovery were noted:

  https://review.openstack.org/#/c/169962/
  https://bugs.launchpad.net/neutron/+bug/1443480

  It was eventually determined that the use of the unittest discovery
  mechanism to perform manual discovery in package init for the api and
  functional subtrees was to blame.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446376] [NEW] Not Authorized response from neutron causes nova to traceback with AttributeError

2015-04-20 Thread Eoghan Glynn
Public bug reported:

version:  stable/icehouse from 2014.2.2 onwards (does not impact
stable/{juno|kilo})

nova-api fails to handle Not Authorized from neutron:

2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 1078, in 
dispatch
2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack return 
method(req=request, **action_args)
2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/floating_ips.py,
 line 187, in delete
2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack except 
exception.Forbidden:
2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack AttributeError: 'module' 
object has no attribute 'Forbidden'

due to a bad backport landed in upstream stable/icehouse:

  https://github.com/openstack/nova/commit/4bc680f2

which uses a juno-era exception class.

It turns out this broken patch has been in the last two upstream
stable/icehouse releases (2014.2.3, 2014.2.4).
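
A standalone illustration (not nova code) of why naming a newer exception class
in an except clause fails only at handling time; the module and class names
here are stand-ins:

    class _OldExceptionModule(object):
        # Stand-in for an icehouse-era exception module with no Forbidden class.
        class NotAuthorized(Exception):
            pass

    exception = _OldExceptionModule()

    try:
        raise exception.NotAuthorized()
    except exception.Forbidden:
        # Never reached: looking up the missing attribute while NotAuthorized
        # is propagating raises AttributeError, exactly as in the trace above.
        pass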

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446376

Title:
  Not Authorized response from neutron causes nova to traceback with
  AttributeError

Status in OpenStack Compute (Nova):
  New

Bug description:
  version:  stable/icehouse from 2014.2.2 onwards (does not impact
  stable/{juno|kilo})

  nova-api fails to handle Not Authorized from neutron:

  2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 1078, in 
dispatch
  2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/floating_ips.py,
 line 187, in delete
  2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack except 
exception.Forbidden:
  2015-04-20 00:49:29.399 22741 TRACE nova.api.openstack AttributeError: 
'module' object has no attribute 'Forbidden'

  due to a bad backport landed in upstream stable/icehouse:

https://github.com/openstack/nova/commit/4bc680f2

  which uses a juno-era exception class.

  It turns out this broken patch has been in the last two upstream
stable/icehouse releases (2014.2.3, 2014.2.4).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1446376/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407721] Re: Add range support to all address fields in pci_passthrough_whitelist

2015-04-20 Thread Moshe Levi
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407721

Title:
  Add range support to all address fields in pci_passthrough_whitelist

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This way a user will be able to exclude a specific VF.

  Example of func range:
  1. pci_passthrough_whitelist = {"address": "*:02:00.2-4", "physical_network": "physnet1"}
  2. pci_passthrough_whitelist = {"address": "*:02:00.2-*", "physical_network": "physnet1"}

  Example of slot range:
  1. pci_passthrough_whitelist = {"address": "*:02:02-04.*", "physical_network": "physnet1"}
  2. pci_passthrough_whitelist = {"address": "*:02:02-*.*", "physical_network": "physnet1"}

  The same applies for bus and domain.
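
  A hedged sketch of how a function-number range such as 00.2-4 could be
  expanded into concrete addresses; this is illustrative only, not how nova
  parses the whitelist today:

    def expand_function_range(bus, slot, func_spec):
        # Illustrative helper only; PCI functions are assumed to run 0-7.
        if '-' in func_spec:
            start, end = func_spec.split('-')
            end = 7 if end == '*' else int(end)
            funcs = range(int(start), end + 1)
        else:
            funcs = [int(func_spec)]
        return ['0000:%s:%s.%x' % (bus, slot, f) for f in funcs]

    print(expand_function_range('02', '00', '2-4'))
    # ['0000:02:00.2', '0000:02:00.3', '0000:02:00.4']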

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407721/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313800] Re: lock wait timeout while updating LB pool

2015-04-20 Thread Joe Gordon
Haven't seen this in a while, marking as invalid

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1313800

Title:
  lock wait timeout while updating LB pool

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Traceback:
  
http://logs.openstack.org/51/89451/2/check/check-tempest-dsvm-neutron-icehouse/e6faf15/logs/screen-q-svc.txt.gz#_2014-04-28_10_08_36_280

  logstash:
  
http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiKE9wZXJhdGlvbmFsRXJyb3IpICgxMjA1LCAnTG9jayB3YWl0IHRpbWVvdXQgZXhjZWVkZWQ7IHRyeSByZXN0YXJ0aW5nIHRyYW5zYWN0aW9uJylcIiBBTkQgTk9UIG1lc3NhZ2U6XCJUcmFjZWJhY2sgKG1vc3QgcmVjZW50IGNhbGwgbGFzdFwiIEFORCB0YWdzOlwic2NyZWVuLXEtc3ZjLnR4dFwiIEFORCBtZXNzYWdlOlwiVVBEQVRFIHBvb2xzXCIiLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTg2OTk1NzAxOTF9

  Occurrences in the past 7 days: 53 (6% of total lock wait timeout failures,
  build failure rate: 35%)

  NOTE: it is not yet clear whether this lock wait timeout occurrence is
  independent from the others already being addressed. This particular
  occurrence is often connected with lock wait timeout in update vips
  (bug 1298355) and update ports (bug 1312964)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1313800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313794] Re: lock_wait timeout caused by plug_vip_port

2015-04-20 Thread Joe Gordon
no, we are not seeing this.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1313794

Title:
  lock_wait timeout caused by plug_vip_port

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  This bug is an instance of the infamous and dreadful issue with
  eventlet yielding within a transaction and causing a lock wait timeout
  in mysql.

  In this instance the lock wait timeout error occurs because during
  delete_port a yield occurs to another thread which tries to update the
  status for the VIP port locked by delete_port.
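
  A schematic reproduction of that pattern, with invented names and a semaphore
  standing in for the MySQL row lock; it is not neutron code, only the shape of
  the race:

    import eventlet
    from eventlet import semaphore


    def delete_port(row_lock):
        with row_lock:              # the transaction holds the VIP port row
            eventlet.sleep(0.1)     # an RPC/driver call yields to other
                                    # greenthreads while the row stays locked


    def update_vip_status(row_lock):
        with row_lock:              # has to wait for delete_port; on a real
            pass                    # MySQL backend this wait is what times
                                    # out ("Lock wait timeout exceeded")


    lock = semaphore.Semaphore()
    t1 = eventlet.spawn(delete_port, lock)
    t2 = eventlet.spawn(update_vip_status, lock)
    t1.wait()
    t2.wait()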

  Traceback here: http://logs.openstack.org/57/88057/8/check/check-
  tempest-dsvm-neutron-
  full/ed58255/logs/screen-q-svc.txt.gz#_2014-04-27_21_44_59_865

  Occurrences in the past 7 days: 231 (100% failure rate, 26% of total lock
  wait timeout errors)

  Logstash details:
  
http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiKE9wZXJhdGlvbmFsRXJyb3IpICgxMjA1LCAnTG9jayB3YWl0IHRpbWVvdXQgZXhjZWVkZWQ7IHRyeSByZXN0YXJ0aW5nIHRyYW5zYWN0aW9uJylcIiBBTkQgTk9UIG1lc3NhZ2U6XCJUcmFjZWJhY2sgKG1vc3QgcmVjZW50IGNhbGwgbGFzdFwiIEFORCB0YWdzOlwic2NyZWVuLXEtc3ZjLnR4dFwiIEFORCBtZXNzYWdlOlwiVVBEQVRFIHBvcnRzIFNFVCBhZG1pbl9zdGF0ZV91cFwiIiwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJvZmZzZXQiOjAsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk4Njk5MzEwOTU4fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1313794/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446418] [NEW] Redundant validation indicator

2015-04-20 Thread Shaoquan Chen
Public bug reported:

In the new Launch Instance wizard, first step, there are redundant
validation indicators for `Instance Name`. When the field is empty, it
will show a red asterisk AND a warning icon.

** Affects: horizon
 Importance: Undecided
 Status: Invalid

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1446418

Title:
  Redundant validation indicator

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In the new Launch Instance wizard, first step, there are redundant
  validation indicators for `Instance Name`. When the field is empty, it
  will show a red asterisk AND a warning icon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1446418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423587] Re: tox -egenconfig appears broken on the juno branch

2015-04-20 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) juno because there has been no
activity for 60 days.]

** Changed in: nova/juno
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423587

Title:
  tox -egenconfig appears broken on the juno branch

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Expired

Bug description:
  tox -egenconfig is the right way to get sample config files since they
  stopped being checked into the tree. It works for me on master, but
  not on stable/juno.

  Note, despite the statement that it succeeded, it in fact did not.
  That's another bug.

  ubuntu@cf3k:~/openstack/nova$ git checkout stable/juno
  gSwitched to branch 'stable/juno'
  Your branch is up-to-date with 'origin/stable/juno'.
  ubuntu@cf3k:~/openstack/nova$ tox -egenconfig
  genconfig develop-inst-nodeps: /home/ubuntu/openstack/nova
  genconfig runtests: commands[0] | bash tools/config/generate_sample.sh -b . 
-p nova -o etc/nova
  WARNING:test command found but not installed in testenv
cmd: /bin/bash
env: /home/ubuntu/openstack/nova/.tox/genconfig
  Maybe forgot to specify a dependency?
  Error importing module nova.test: No module named mox
  Traceback (most recent call last):
File /usr/lib/python2.7/runpy.py, line 162, in _run_module_as_main
  __main__, fname, loader, pkg_name)
File /usr/lib/python2.7/runpy.py, line 72, in _run_code
  exec code in run_globals
File 
/home/ubuntu/openstack/nova/nova/openstack/common/config/generator.py, line 
307, in module
  main()
File 
/home/ubuntu/openstack/nova/nova/openstack/common/config/generator.py, line 
304, in main
  generate(sys.argv[1:])
File 
/home/ubuntu/openstack/nova/nova/openstack/common/config/generator.py, line 
130, in generate
  raise RuntimeError("Unable to import module %s" % mod_str)
  RuntimeError: Unable to import module nova.test
  __ summary 
___
genconfig: commands succeeded
congratulations :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1423587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446465] [NEW] test_plug_succeeds failed for _create_namespace

2015-04-20 Thread YAMAMOTO Takashi
Public bug reported:

_create_namespace method was recently removed
(commit 7f7343b1afc0b1b953e5c36a753397a6d37316cb)
but still has a few users.

neutron/tests/functional/agent/linux/test_interface.py:namespace = 
self._create_namespace()
neutron/tests/functional/agent/test_ovs_flows.py:self.src_ns = 
self._create_namespace()
neutron/tests/functional/agent/test_ovs_flows.py:self.dst_ns = 
self._create_namespace()

test_ovs_flows ones are hidden by other bug.
https://bugs.launchpad.net/neutron/+bug/1446456

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

** Description changed:

- _create_namespace method was recently removed but still has a few users.
+ _create_namespace method was recently removed
+ (commit 7f7343b1afc0b1b953e5c36a753397a6d37316cb)
+ but still has a few users.
  
  neutron/tests/functional/agent/linux/test_interface.py:namespace = 
self._create_namespace()
  neutron/tests/functional/agent/test_ovs_flows.py:self.src_ns = 
self._create_namespace()
  neutron/tests/functional/agent/test_ovs_flows.py:self.dst_ns = 
self._create_namespace()
  
  test_ovs_flows ones are hidden by other bug.
  https://bugs.launchpad.net/neutron/+bug/1446456

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446465

Title:
  test_plug_succeeds failed for _create_namespace

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  _create_namespace method was recently removed
  (commit 7f7343b1afc0b1b953e5c36a753397a6d37316cb)
  but still has a few users.

  neutron/tests/functional/agent/linux/test_interface.py:namespace = 
self._create_namespace()
  neutron/tests/functional/agent/test_ovs_flows.py:self.src_ns = 
self._create_namespace()
  neutron/tests/functional/agent/test_ovs_flows.py:self.dst_ns = 
self._create_namespace()

  test_ovs_flows ones are hidden by other bug.
  https://bugs.launchpad.net/neutron/+bug/1446456

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446449] [NEW] ironic hypervisor resource should be released for booting failed case

2015-04-20 Thread Haomeng,Wang
Public bug reported:


nova boot failed in the spawn step, yet the ironic hypervisor stats show the
mem/cpu/disk resources are still occupied by the nova instance, which is in
error status. I understand that for such a failed boot, the ironic
hypervisor resources should be released once the boot completes with an
error.
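
A hedged sketch of the expected behaviour in resource-tracker terms: the claim
taken before spawn should be dropped again when spawn fails. The names here are
illustrative and this is not the real nova/ironic code path:

    class ResourceClaim(object):
        # Toy claim object: usage goes up on entry and is rolled back if the
        # wrapped spawn raises, so a failed boot does not pin the resources.
        def __init__(self, tracker, memory_mb, disk_gb, vcpus):
            self.tracker = tracker
            self.usage = {'memory_mb': memory_mb, 'disk_gb': disk_gb,
                          'vcpus': vcpus}

        def __enter__(self):
            for k, v in self.usage.items():
                self.tracker[k] += v
            return self

        def __exit__(self, exc_type, exc, tb):
            if exc_type is not None:
                for k, v in self.usage.items():
                    self.tracker[k] -= v
            return False  # let the spawn error propagate


    tracker = {'memory_mb': 0, 'disk_gb': 0, 'vcpus': 0}
    try:
        with ResourceClaim(tracker, 2048, 30, 2):
            raise RuntimeError('spawn failed')  # boot ends in ERROR state
    except RuntimeError:
        pass

    print(tracker)  # back to all zeroes -- the behaviour this report asks for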

[root@hbcontrol ~]# nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 1     |
| current_workload     | 0     |
| disk_available_least | 30    |
| free_disk_gb         | 0     |
| free_ram_mb          | 0     |
| local_gb             | 30    |
| local_gb_used        | 30    |
| memory_mb            | 2048  |
| memory_mb_used       | 2048  |
| running_vms          | 1     |
| vcpus                | 2     |
| vcpus_used           | 2     |
+----------------------+-------+
[root@hbcontrol ~]#

[root@hbcontrol ~]# ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| ccdce9d8-2f6a-4d7f-8c53-f89f289fd0a1 | None | None          | power on    | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
[root@hbcontrol ~]#


nova compute log

2015-04-21 07:31:51.979 1337 INFO nova.compute.manager 
[req-9abbf97b-8bf0-495a-850e-7874dbb87a22 2b0fbc7394cf4459867f2957e268e2d2 
db9f9ab6aef84239a0206c2bb810b55a - - -] [instance: 
5597c25c-287e-420f-89d3-4f5a211471b8] Starting instance...
2015-04-21 07:31:52.923 1337 INFO nova.compute.claims [-] [instance: 
5597c25c-287e-420f-89d3-4f5a211471b8] Attempting claim: memory 2048 MB, disk 30 
GB
2015-04-21 07:31:52.928 1337 INFO nova.compute.claims [-] [instance: 
5597c25c-287e-420f-89d3-4f5a211471b8] Total memory: 2048 MB, used: 0.00 MB
2015-04-21 07:31:52.928 1337 INFO nova.compute.claims [-] [instance: 
5597c25c-287e-420f-89d3-4f5a211471b8] memory limit: 2048.00 MB, free: 2048.00 MB
2015-04-21 07:31:52.928 1337 INFO nova.compute.claims [-] [instance: 
5597c25c-287e-420f-89d3-4f5a211471b8] Total disk: 30 GB, used: 0.00 GB
2015-04-21 07:31:52.928 1337 INFO nova.compute.claims [-] [instance: 
5597c25c-287e-420f-89d3-4f5a211471b8] disk limit not specified, defaulting to 
unlimited
2015-04-21 07:31:53.002 1337 INFO nova.compute.claims [-] [instance: 
5597c25c-287e-420f-89d3-4f5a211471b8] Claim successful

2015-04-21 07:32:06.331 1337 ERROR nova.servicegroup.drivers.db 
[req-a53a836a-18b5-4da4-9e1a-3ad4a95de788 - - - - -] model server went away
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db Traceback (most 
recent call last):
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db   File 
/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py, line 112, 
in _report_state
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db 
service.service_ref, state_catalog)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db   File 
/usr/lib/python2.7/site-packages/nova/conductor/api.py, line 164, in 
service_update
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db return 
self._manager.service_update(context, service, values)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db   File 
/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py, line 284, in 
service_update
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db 
service=service_p, values=values)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py, line 156, in 
call
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db 
retry=self.retry)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db   File 
/usr/lib/python2.7/site-packages/oslo_messaging/transport.py, line 90, in 
_send
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db 
timeout=timeout, retry=retry)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
349, in send
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db retry=retry)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
338, in _send
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db result = 
self._waiter.wait(msg_id, timeout)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
242, in wait
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db message = 

[Yahoo-eng-team] [Bug 1446286] Re: Exporting a deactivated image returns a generic task failure message

2015-04-20 Thread nikhil komawar
How can this be a bug if export is not even an implemented task yet?

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1446286

Title:
  Exporting a deactivated image returns a generic task failure message

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Overview:
  When a user attempts to export a deactivated image, the operation fails as 
expected, but the message that is returned explaining why it failed is very 
generic.

  Steps to reproduce:
  1) Import an image as user A
  2) Deactivate the image as admin
  3) Export the image as user A
  4) Notice the export task fails, but with a generic error message

  Expected:
  A more detailed error message as to why the export task failed for a 
deactivated image

  Actual:
  A generic error message reading "Unknown error occurred during import export"
is returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1446286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446074] [NEW] FWaaS - Missing tenant_id validation between firewall and firewall_policy in creating/updating firewall

2015-04-20 Thread Yushiro FURUKAWA
Public bug reported:

In creating/updating a firewall, no tenant_id check is implemented in the
validation.
Therefore, when the following operation is executed with admin privileges,
the error keeps being traced into neutron's log even though the firewall has been created.
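
A hedged sketch of the kind of check this report says is missing; the helper
name and dict shapes are illustrative, not the actual neutron-fwaas plugin
code:

    def _validate_firewall_policy_tenant(firewall, policy):
        # Reject cross-tenant use of a non-shared firewall policy so the
        # failure surfaces at create/update time instead of in the agent loop.
        if policy is None:
            raise ValueError("Firewall policy not found")
        if not policy.get('shared') and \
                policy['tenant_id'] != firewall['tenant_id']:
            raise ValueError("Firewall policy %s belongs to another tenant "
                             "and is not shared" % policy['id'])


    policy = {'id': '40648e44-2175-4ad7-b190-93179900ac63',
              'tenant_id': 'alt-demo-tenant', 'shared': False}
    firewall = {'name': 'my_fw', 'tenant_id': 'demo-tenant'}
    try:
        _validate_firewall_policy_tenant(firewall, policy)
    except ValueError as exc:
        print(exc)  # rejected up front instead of tracing in the log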

[Operation]
1. Create a firewall-policy (shared=False) in the alt_demo tenant.
  $ source devstack/openrc alt_demo alt_demo
2. Change privilege from alt_demo to admin (in the demo tenant).
  $ source devstack/openrc admin demo
3. Create a firewall using the firewall-policy in the alt_demo tenant.
  $ neutron firewall-create firewall-policy-in-alt_demo --name my_fw

[Result]
Created a new firewall:
++--+
| Field  | Value|
++--+
| admin_state_up | True |
| description|  |
| firewall_policy_id | 40648e44-2175-4ad7-b190-93179900ac63 |
| id | fff7cbc0-1896-4b6c-8dee-633df68624c2 |
| name   | my_fw|
| router_ids | cab4d01f-053b-4e07-a764-d829e66a3f6e |
| status | PENDING_CREATE   |
| tenant_id  | 65ecf5dfa6f8484f81027d3b25af1dbc |
++--+

[Error log] (this trace keeps repeating...)
ERROR oslo_messaging.rpc.dispatcher [req-bedc6d68-268d-4be0-8e68-9c14bf659390 
None 65ecf5dfa6f8484f81027d3b25af1dbc] Exception during message handling: 
Firewall Policy 40648e44-2175-4ad7-b190-93179900ac63 could not be found.
TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply
TRACE oslo_messaging.rpc.dispatcher executor_callback))
TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch
TRACE oslo_messaging.rpc.dispatcher executor_callback)
TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch
TRACE oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args)
TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/neutron-fwaas/neutron_fwaas/services/firewall/fwaas_plugin.py, 
line 85, in get_firewalls_for_tenant
TRACE oslo_messaging.rpc.dispatcher context, fw['id'])
TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/firewall_db.py, line 169, 
in _make_firewall_dict_with_rules
TRACE oslo_messaging.rpc.dispatcher fw_policy = 
self.get_firewall_policy(context, fw_policy_id)
TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/firewall_db.py, line 395, 
in get_firewall_policy
TRACE oslo_messaging.rpc.dispatcher fwp = 
self._get_firewall_policy(context, id)
TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/firewall_db.py, line 103, 
in _get_firewall_policy
TRACE oslo_messaging.rpc.dispatcher raise 
fw_ext.FirewallPolicyNotFound(firewall_policy_id=id)
TRACE oslo_messaging.rpc.dispatcher FirewallPolicyNotFound: Firewall Policy 
40648e44-2175-4ad7-b190-93179900ac63 could not be found.
TRACE oslo_messaging.rpc.dispatcher
ERROR oslo_messaging._drivers.common [req-bedc6d68-268d-4be0-8e68-9c14bf659390 
None 65ecf5dfa6f8484f81027d3b25af1dbc] Returning exception Firewall Policy 
40648e44-2175-4ad7-b190-93179900ac63 could not be found. to caller
ERROR oslo_messaging._drivers.common [req-bedc6d68-268d-4be0-8e68-9c14bf659390 
None 65ecf5dfa6f8484f81027d3b25af1dbc] ['Traceback (most recent call last):\n', 
'  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply\nexecutor_callback))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch\nexecutor_callback)\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', '  File 
/opt/stack/neutron-fwaas/neutron_fwaas/services/firewall/fwaas_plugin.py, 
line 85, in get_firewalls_for_tenant\ncontext, fw[\'id\'])\n', '  File 
/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/firewall_db.py, line 169, 
in _make_firewall_dict_with_rules\nfw_policy = 
self.get_firewall_policy(context, fw_policy_id)\n', '  File 
/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/firewall_
 db.py, line 395, in get_firewall_policy\nfwp = 
self._get_firewall_policy(context, id)\n', '  File 
/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/firewall_db.py, line 103, 
in _get_firewall_policy\nraise 
fw_ext.FirewallPolicyNotFound(firewall_policy_id=id)\n', 
'FirewallPolicyNotFound: Firewall 

[Yahoo-eng-team] [Bug 1383465] Re: [pci-passthrough] nova-compute fails to start

2015-04-20 Thread Nikola Đipanov
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383465

Title:
  [pci-passthrough] nova-compute fails to start

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Created a guest using nova with a passthrough device, shutdown that
  guest, and disabled nova-compute (openstack-service stop). Went to
  turn things back on, and nova-compute fails to start.

  The trace:
  2014-10-20 16:06:45.734 48553 ERROR nova.openstack.common.threadgroup [-] PCI 
device request ({'requests': 
[InstancePCIRequest(alias_name='rook',count=2,is_new=False,request_id=None,spec=[{product_id='10fb',vendor_id='8086'}])],
 'code': 500}equests)s failed
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py, line 
125, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py, line 
47, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 173, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/event.py, line 121, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py, line 293, in switch
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 212, in main
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/service.py, line 492, 
in run_service
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/service.py, line 181, in start
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1152, in 
pre_start_hook
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 5949, in 
update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
rt.update_available_resource(context)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py, line 332, 
in update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self._update_available_resource(context, resources)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py, line 
272, in inner
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return f(*args, **kwargs)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py, line 349, 
in _update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self._update_usage_from_instances(context, resources, instances)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py, line 708, 
in _update_usage_from_instances
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self._update_usage_from_instance(context, resources, instance)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 

[Yahoo-eng-team] [Bug 1446161] [NEW] Support multiple IPv6 prefixes on internal router ports for an HA Router.

2015-04-20 Thread Sridhar Gaddam
Public bug reported:

As part of BP multiple IPv6 prefixes, we can have multiple IPv6 prefixes on
router internal ports. Patch, I7d4e8194815e626f1cfa267f77a3f2475fdfa3d1, adds
the necessary support for a legacy router.

For an HA router, instead of configuring the addresses on the router internal
ports we should be updating the keepalived config file and let keepalived
configure the addresses depending on the state of the router.

Following are the observations with the current code for an HA router.
1. IPv6 addresses are configured on the router internal ports (i.e., qr-xxx)
   irrespective of the state of the router. As the same IP is configured on
   multiple ports you will notice dadfailed status on the ports.
2. Keepalived configuration is not updated with the new IPv6 addresses.

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Sridhar Gaddam (sridhargaddam)

** Changed in: neutron
   Status: New => In Progress

** Description changed:

  As part of BP multiple IPv6 prefixes, we can have multiple IPv6 prefixes on
  router internal ports. Patch, I7d4e8194815e626f1cfa267f77a3f2475fdfa3d1, adds
- the necessary support for a legacy router. 
+ the necessary support for a legacy router.
  
  For an HA router, instead of configuring the addresses on the router internal
  ports we should be updating the keepalived config file and let keepalived
- configure the addresses depending on the state of the router. 
+ configure the addresses depending on the state of the router.
  
  Following are the observations with the current code for an HA router.
  1. IPv6 addresses are configured on the router internal ports (i.e., qr-xxx)
-irrespective of the state of the router. As the same IP is configured on 
multiple
-ports you will notice dadfailed status on the ports.
+    irrespective of the state of the router. As the same IP is configured on
+multiple ports you will notice dadfailed status on the ports.
  2. Keepalived configuration is not updated with the new IPv6 addresses.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446161

Title:
  Support multiple IPv6 prefixes on internal router ports for an HA
  Router.

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  As part of BP multiple IPv6 prefixes, we can have multiple IPv6 prefixes on
  router internal ports. Patch, I7d4e8194815e626f1cfa267f77a3f2475fdfa3d1, adds
  the necessary support for a legacy router.

  For an HA router, instead of configuring the addresses on the router internal
  ports we should be updating the keepalived config file and let keepalived
  configure the addresses depending on the state of the router.

  Following are the observations with the current code for an HA router.
  1. IPv6 addresses are configured on the router internal ports (i.e., qr-xxx)
     irrespective of the state of the router. As the same IP is configured on
 multiple ports you will notice dadfailed status on the ports.
  2. Keepalived configuration is not updated with the new IPv6 addresses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446078] [NEW] Apache error logs contain non-error messages

2015-04-20 Thread Aakash Soni
Public bug reported:

In the file /var/log/apache2/horizon.error.log there are many lines that
are not errors:

[Fri Apr 17 14:49:32.604840 2015] [:error] [pid 28933:tid 140515448411904] 
Login successful for user admin.
[Fri Apr 17 13:04:25.892490 2015] [:error] [pid 28933:tid 140515448411904] 
Logging out user admin.
[Fri Apr 17 06:08:22.662311 2015] [:error] [pid 28933:tid 140515414841088] 
Creating user with name demo
[Fri Apr 17 14:50:51.395871 2015] [:error] [pid 28933:tid 140515414841088] 
Project switch successful for user admin.
[Thu Apr 16 08:40:23.971582 2015] [:error] [pid 28933:tid 140515473590016] 
Deleted token 9295d3bcfb74cda6703b91a94487072c

All these messages are logged with class 'ERROR'. As these messages are
success messages, they should be of type 'INFO'.

To reproduce the bug, log in to the dashboard and then check
horizon.error.log; you'll find the login success message in the log with
class [:error].
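
The messages typically land in horizon.error.log because output the WSGI
process writes to stderr is logged by Apache/mod_wsgi at error level; a hedged
sketch of a Django LOGGING override for local_settings.py that routes these
INFO-level messages to a separate file instead (the file path and logger names
are assumptions):

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'handlers': {
            'operation': {
                'level': 'INFO',
                'class': 'logging.FileHandler',
                'filename': '/var/log/horizon/horizon-operations.log',
            },
        },
        'loggers': {
            # openstack_auth emits the "Login successful ..." style messages;
            # horizon covers the rest of the dashboard's informational output.
            'openstack_auth': {'handlers': ['operation'],
                               'level': 'INFO', 'propagate': False},
            'horizon': {'handlers': ['operation'],
                        'level': 'INFO', 'propagate': False},
        },
    }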

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1446078

Title:
  Apache error logs contain non-error messages

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the file /var/log/apache2/horizon.error.log there are many lines
  that are not errors:

  [Fri Apr 17 14:49:32.604840 2015] [:error] [pid 28933:tid 140515448411904] 
Login successful for user admin.
  [Fri Apr 17 13:04:25.892490 2015] [:error] [pid 28933:tid 140515448411904] 
Logging out user admin.
  [Fri Apr 17 06:08:22.662311 2015] [:error] [pid 28933:tid 140515414841088] 
Creating user with name demo
  [Fri Apr 17 14:50:51.395871 2015] [:error] [pid 28933:tid 140515414841088] 
Project switch successful for user admin.
  [Thu Apr 16 08:40:23.971582 2015] [:error] [pid 28933:tid 140515473590016] 
Deleted token 9295d3bcfb74cda6703b91a94487072c

  All these messages are logged with class 'ERROR'. As these messages
  are success messages, they should be of type 'INFO'.

  To reproduce the bug, log in to the dashboard and then check
  horizon.error.log; you'll find the login success message in the log
  with class [:error].

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1446078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446087] [NEW] Impossible to create port with port_security_enabled = False and security_groups=[]

2015-04-20 Thread Dmitry Ratushnyy
Public bug reported:

Creating a port while specifying both port_security_enabled=False and
security_groups=[] raises PortSecurityAndIPRequiredForSecurityGroups.

Steps to reproduce:
1) Make sure that port security is enabled for ML2 (for DevStack 
https://review.openstack.org/#/c/162063/)

2) create network with neutron net-create
neutron net-create test
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 61a8a4be-d607-4683-b636-f1ae08bc135e |
| mtu   | 0|
| name  | test |
| port_security_enabled | True |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 1003 |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | bbd1128dfeb141b3b6442d910cb64dfa |
+---+--+


2) Optional (does not affect result) 
neutron subnet-create 61a8a4be-d607-4683-b636-f1ae08bc135e 10.10.0.1/24
3) Try to create port 
neutron port-create 61a8a4be-d607-4683-b636-f1ae08bc135e --no-security-groups 
--port-security-enabled=False

4) Result: 
Port security must be enabled and port must have an IP address in order to use 
security groups.

Expected result: Port created with port_security_enabled=False and no
security groups attached to port
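
The same request expressed against the REST API, as a hedged sketch; the
endpoint, token, and network ID below are placeholders copied from the output
above, not a verified reproduction:

    import json
    import requests

    body = {'port': {
        'network_id': '61a8a4be-d607-4683-b636-f1ae08bc135e',
        'port_security_enabled': False,
        'security_groups': [],   # explicitly empty, as in step 3
    }}
    resp = requests.post('http://controller:9696/v2.0/ports',
                         headers={'X-Auth-Token': 'ADMIN_TOKEN',
                                  'Content-Type': 'application/json'},
                         data=json.dumps(body))
    # Expected: 201 with port_security_enabled=False and no security groups.
    # Actual per this report: a 4xx PortSecurityAndIPRequiredForSecurityGroups
    # error, the same failure the CLI shows.
    print(resp.status_code, resp.text)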

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446087

Title:
  Impossible  to create port with port_security_enabled = False and
  security_groups=[]

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Creating a port while specifying both port_security_enabled=False and
  security_groups=[] raises PortSecurityAndIPRequiredForSecurityGroups.

  Steps to reproduce:
  1) Make sure that port security is enabled for ML2 (for DevStack 
https://review.openstack.org/#/c/162063/)

  2) create network with neutron net-create
  neutron net-create test
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| 61a8a4be-d607-4683-b636-f1ae08bc135e |
  | mtu   | 0|
  | name  | test |
  | port_security_enabled | True |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 1003 |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tenant_id | bbd1128dfeb141b3b6442d910cb64dfa |
  +---+--+

  
  2) Optional (does not affect result) 
  neutron subnet-create 61a8a4be-d607-4683-b636-f1ae08bc135e 10.10.0.1/24
  3) Try to create port 
  neutron port-create 61a8a4be-d607-4683-b636-f1ae08bc135e --no-security-groups 
--port-security-enabled=False

  4) Result: 
  Port security must be enabled and port must have an IP address in order to 
use security groups.

  Expected result: Port created with port_security_enabled=False and no
  security groups attached to port

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439928] Re: can create the same type and name of a service with v3 API

2015-04-20 Thread Dolph Mathews
I don't see any reason for this to be a Medium bug - there's
absolutely no negative impact described here. In fact, the documented
behavior is as-designed. To quote myself from the code review above:

  The most obvious conflicting use case I can think of is having two
services of the same type in the same region with different guarantees
around performance, availability, etc. Perhaps those two services are
branded differently (different service names), and incur different
billing behaviors.

Furthermore, if there actually is a user experience issue here, a
deployer could solve it through endpoint filtering (exposing one service
to one subset of users, and exposing the second service to another
subset of users). Otherwise, any bug that could result from this is
either because the user is ambiguously choosing endpoints and / or the
client isn't providing sufficient feedback to cater to the scenario.

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1439928

Title:
  can create the same type and name of a service with v3 API

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  I create a service as follows; it succeeds.
  curl -H "X-Auth_Token: fc1629a543c64be18937ba8a1296468b" -H "Content-type:
application/json" -d
'{"service": {"description": "test_service", "name": "name_service", "type": "test_servce"}}'
 http://localhost:35357/v3/services | python -mjson.tool
% Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100   325  100   240  10085   1265448 --:--:-- --:--:-- --:--:--  1269
  {
      "service": {
          "description": "test_service",
          "enabled": true,
          "id": "2d0da8b3d57b4d35a53d4b4a6659b8e4",
          "links": {
              "self": "http://localhost:35357/v3/services/2d0da8b3d57b4d35a53d4b4a6659b8e4"
          },
          "name": "name_service",
          "type": "test_servce"
      }
  }

  When I create a service with the same command again, it still
  succeeds. The service list is as follows; there are two records with
  the same name and type.

  curl -H "X-Auth_Token: 89abc0b308154bb59b5fcc8bd95669f5"
http://localhost:35357/v3/services | python -mjson.tool
% Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100  1920  100  19200 0   7874  0 --:--:-- --:--:-- --:--:--  7901
  {
      "links": {
          "next": null,
          "previous": null,
          "self": "http://localhost:35357/v3/services"
      },
      "services": [
          {
              "description": "OpenStack Networking",
              "enabled": true,
              "id": "18c27349d6bf4606a81239164a9be42b",
              "links": {
                  "self": "http://localhost:35357/v3/services/18c27349d6bf4606a81239164a9be42b"
              },
              "name": "neutron",
              "type": "network"
          },
          {
              "description": "test_service",
              "enabled": true,
              "id": "2d0da8b3d57b4d35a53d4b4a6659b8e4",
              "links": {
                  "self": "http://localhost:35357/v3/services/2d0da8b3d57b4d35a53d4b4a6659b8e4"
              },
              "name": "name_service",
              "type": "test_servce"
          },

          {
              "description": "test_service",
              "enabled": true,
              "id": "9af35c8f07c541d08b7bd4b65a0307da",
              "links": {
                  "self": "http://localhost:35357/v3/services/9af35c8f07c541d08b7bd4b65a0307da"
              },
              "name": "name_service",
              "type": "test_servce"
          },

      ]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1439928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435396] Re: No notifications for role grants using v2

2015-04-20 Thread Thierry Carrez
Agree with fungi

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1435396

Title:
  No notifications for role grants using v2

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  
  If you use the v3 API to add or remove role grants, you get notifications 
that it happened, but if you use the v2.0 API, you don't get notifications.

  Keystone needs to send notifications when the v2 API is used, also.

  For example, start with devstack, then grant a role:

  $ keystone user-role-add --user demo --tenant admin --role admin
 - gets a notification for identity.authenticate, but none for 
identity.created.role_assignment

  Same for revoke:

  $ keystone user-role-remove --user demo --tenant admin --role admin
 - gets a notification for identity.authenticate, but none for 
identity.deleted.role_assignment

  v3 works fine:

  $ curl -X PUT -H "X-Auth-Token: $TOKEN"
  http://localhost:5000/v3/projects/$PROJECT_ID/users/$USER_ID/roles/$ROLE_ID

  $ curl -X DELETE -H "X-Auth-Token: $TOKEN"
  http://localhost:5000/v3/projects/$PROJECT_ID/users/$USER_ID/roles/$ROLE_ID

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1435396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445199] Re: Nova user should not have admin role

2015-04-20 Thread Thierry Carrez
** Changed in: ossa
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445199

Title:
  Nova user should not have admin role

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Security Advisories:
  Invalid

Bug description:
  
  Most of the service users are granted the 'service' role on the 'service' 
project, except the 'nova' user which is given 'admin'. The 'nova' user should 
also be given only the 'service' role on the 'service' project.

  This is for security hardening.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1445199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446253] [NEW] docs need to be updated to follow conventions

2015-04-20 Thread Shilla Saebi
Public bug reported:

In all the horizon docs, the conventions need to be updated per the doc
conventions wiki (see link below). This needs to be done for all the
docs throughout.

one of the pages that needs to be updated:
http://docs.openstack.org/developer/horizon/topics/policy.html

link to the doc conventions:
https://wiki.openstack.org/wiki/Documentation/Conventions

** Affects: horizon
 Importance: Low
 Assignee: Shilla Saebi (shilla-saebi)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1446253

Title:
  docs need to be updated to follow conventions

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In all the horizon docs, the conventions need to be updated per the
  doc conventions wiki (see link below). This needs to be done for all
  the docs throughout.

  one of the pages that needs to be updated:
  http://docs.openstack.org/developer/horizon/topics/policy.html

  link to the doc conventions:
  https://wiki.openstack.org/wiki/Documentation/Conventions

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1446253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443798] Re: Restrict netmask of CIDR to avoid DHCP resync

2015-04-20 Thread Thierry Carrez
** Changed in: ossa
   Importance: Undecided => High

** Changed in: ossa
   Status: Incomplete => Confirmed

** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443798

Title:
  Restrict netmask of CIDR to avoid DHCP resync

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron kilo series:
  New
Status in OpenStack Security Advisories:
  Confirmed

Bug description:
  If any tenant creates an IPv4 subnet with a netmask of /31 or /32,
  the network's IP addresses will fail to be generated, and that
  will cause constant resyncs and neutron-dhcp-agent malfunction.
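
  A small worked example of why /31 and /32 leave nothing to allocate: once the
  network address, the last address, and the default gateway (192.168.0.1 in
  the CLI output below) are reserved, no host addresses remain. netaddr is used
  here only for the arithmetic; this is not neutron's allocation code:

    import netaddr

    for cidr in ('192.168.0.0/31', '192.168.0.0/32', '192.168.0.0/24'):
        net = netaddr.IPNetwork(cidr)
        reserved = {net.network, net[-1], netaddr.IPAddress('192.168.0.1')}
        usable = [ip for ip in net if ip not in reserved]
        print(cidr, len(usable))
    # 192.168.0.0/31 0
    # 192.168.0.0/32 0
    # 192.168.0.0/24 253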

  [Example operation 1]
   - Create subnet from CLI, with CIDR /31 (CIDR /32 has the same result).

  $ neutron subnet-create net 192.168.0.0/31 --name sub
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  |  |
  | cidr  | 192.168.0.0/31   |
  | dns_nameservers   |  |
  | enable_dhcp   | True |
  | gateway_ip| 192.168.0.1  |
  | host_routes   |  |
  | id| 42a91f59-1c2d-4e33-9033-4691069c5e4b |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | sub  |
  | network_id| 65cc6b46-17ec-41a8-9fe4-5bf93fc25d1e |
  | subnetpool_id |  |
  | tenant_id | 4ffb89e718d346b48fdce2ac61537bce |
  +---+--+

  [Example operation 2]
   - Create subnet from API, with cidr /32 (CIDR /31 has the same result).

  $ curl -i -X POST -H "content-type: application/json" -d '{"subnet": {"name":
"badsub", "cidr": "192.168.0.0/32", "ip_version": 4, "network_id":
  "88143cda-5fe7-45b6-9245-b1e8b75d28d8"}}' -H "x-auth-token: $TOKEN"
http://192.168.122.130:9696/v2.0/subnets
  HTTP/1.1 201 Created
  Content-Type: application/json; charset=UTF-8
  Content-Length: 410
  X-Openstack-Request-Id: req-4e7e74c0-0190-4a69-a9eb-93d545e8aeef
  Date: Thu, 16 Apr 2015 19:21:20 GMT

  {"subnet": {"name": "badsub", "enable_dhcp": true, "network_id":
  "88143cda-5fe7-45b6-9245-b1e8b75d28d8", "tenant_id":
  "4ffb89e718d346b48fdce2ac61537bce", "dns_nameservers": [],
  "gateway_ip": "192.168.0.1", "ipv6_ra_mode": null, "allocation_pools":
  [], "host_routes": [], "ip_version": 4, "ipv6_address_mode": null,
  "cidr": "192.168.0.0/32", "id": "d210d5fd-8b3b-4c0e-b5ad-
  41798bd47d97", "subnetpool_id": null}}

  [Example operation 3]
   - Create subnet from API, with empty allocation_pools.

  $ curl -i -X POST -H content-type:application/json \
      -d '{"subnet": {"name": "badsub", "cidr": "192.168.0.0/24", "allocation_pools": [], "ip_version": 4, "network_id": "88143cda-5fe7-45b6-9245-b1e8b75d28d8"}}' \
      -H x-auth-token:$TOKEN http://192.168.122.130:9696/v2.0/subnets
  HTTP/1.1 201 Created
  Content-Type: application/json; charset=UTF-8
  Content-Length: 410
  X-Openstack-Request-Id: req-54ce81db-b586-4887-b60b-8776a2ebdb4e
  Date: Thu, 16 Apr 2015 19:18:21 GMT

  {"subnet": {"name": "badsub", "enable_dhcp": true, "network_id":
  "88143cda-5fe7-45b6-9245-b1e8b75d28d8", "tenant_id":
  "4ffb89e718d346b48fdce2ac61537bce", "dns_nameservers": [],
  "gateway_ip": "192.168.0.1", "ipv6_ra_mode": null, "allocation_pools":
  [], "host_routes": [], "ip_version": 4, "ipv6_address_mode": null,
  "cidr": "192.168.0.0/24", "id": "abc2dca4-bf8b-46f5-af1a-0a1049309854",
  "subnetpool_id": null}}

  [Trace log]
  2015-04-17 04:23:27.907 16641 DEBUG oslo_messaging._drivers.amqp [-] 
UNIQUE_ID is e0a6a81a005d4aa0b40130506afa0267. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
  2015-04-17 04:23:27.979 16641 ERROR neutron.agent.dhcp.agent [-] Unable to 
enable dhcp for 88143cda-5fe7-45b6-9245-b1e8b75d28d8.
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/dhcp/agent.py, line 112, in call_driver
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 201, in enable
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File 

[Yahoo-eng-team] [Bug 1446261] [NEW] gate-neutron-dsvm-functional race fails HA/DVR tests with network namespace not found

2015-04-20 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/21/174821/1/gate/gate-neutron-dsvm-
functional/eb6b441/console.html#_2015-04-20_07_41_14_649

This is happening quite often and is not just this specific test:

2015-04-20 07:41:14.649 | 2015-04-20 07:41:14.642 | {4} 
neutron.tests.functional.agent.test_l3_agent.L3HATestFramework.test_ha_router_failover
 [12.678377s] ... FAILED
2015-04-20 07:41:14.650 | 2015-04-20 07:41:14.643 | 
2015-04-20 07:41:14.670 | 2015-04-20 07:41:14.647 | Captured traceback:
2015-04-20 07:41:14.671 | 2015-04-20 07:41:14.649 | ~~~
2015-04-20 07:41:14.671 | 2015-04-20 07:41:14.650 | Traceback (most recent 
call last):
2015-04-20 07:41:14.672 | 2015-04-20 07:41:14.652 |   File 
neutron/tests/functional/agent/test_l3_agent.py, line 762, in 
test_ha_router_failover
2015-04-20 07:41:14.673 | 2015-04-20 07:41:14.653 | 
ha_device.link.set_down()
2015-04-20 07:41:14.673 | 2015-04-20 07:41:14.655 |   File 
neutron/agent/linux/ip_lib.py, line 279, in set_down
2015-04-20 07:41:14.674 | 2015-04-20 07:41:14.658 | self._as_root([], 
('set', self.name, 'down'))
2015-04-20 07:41:14.675 | 2015-04-20 07:41:14.661 |   File 
neutron/agent/linux/ip_lib.py, line 222, in _as_root
2015-04-20 07:41:14.675 | 2015-04-20 07:41:14.663 | 
use_root_namespace=use_root_namespace)
2015-04-20 07:41:14.676 | 2015-04-20 07:41:14.664 |   File 
neutron/agent/linux/ip_lib.py, line 69, in _as_root
2015-04-20 07:41:14.677 | 2015-04-20 07:41:14.666 | 
log_fail_as_error=self.log_fail_as_error)
2015-04-20 07:41:14.677 | 2015-04-20 07:41:14.668 |   File 
neutron/agent/linux/ip_lib.py, line 78, in _execute
2015-04-20 07:41:14.679 | 2015-04-20 07:41:14.672 | 
log_fail_as_error=log_fail_as_error)
2015-04-20 07:41:14.681 | 2015-04-20 07:41:14.675 |   File 
neutron/agent/linux/utils.py, line 137, in execute
2015-04-20 07:41:14.683 | 2015-04-20 07:41:14.676 | raise 
RuntimeError(m)
2015-04-20 07:41:14.684 | 2015-04-20 07:41:14.678 | RuntimeError: 
2015-04-20 07:41:14.686 | 2015-04-20 07:41:14.679 | Command: ['ip', 
'netns', 'exec', 'qrouter-425dfc14-0f4d-45fa-8218-531cae21711f@agent1', 'ip', 
'link', 'set', 'ha-29cbe060-37', 'down']
2015-04-20 07:41:14.688 | 2015-04-20 07:41:14.681 | Exit code: 1
2015-04-20 07:41:14.689 | 2015-04-20 07:41:14.682 | Stdin: 
2015-04-20 07:41:14.691 | 2015-04-20 07:41:14.684 | Stdout: 
2015-04-20 07:41:14.692 | 2015-04-20 07:41:14.685 | Stderr: Cannot open 
network namespace qrouter-425dfc14-0f4d-45fa-8218-531cae21711f@agent1: No 
such file or directory


If you restrict to just the gate job, it's 62 hits in 7 days:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RkZXJyOiBDYW5ub3Qgb3BlbiBuZXR3b3JrIG5hbWVzcGFjZSBcXFwicXJvdXRlclwiIEFORCBtZXNzYWdlOlwiQGFnZW50MVxcXCI6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnlcIiBBTkQgYnVpbGRfbmFtZTpcImdhdGUtbmV1dHJvbi1kc3ZtLWZ1bmN0aW9uYWxcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyOTU0MDU5NTI0NiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: dvr ha testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446261

Title:
  gate-neutron-dsvm-functional race fails HA/DVR tests with network
  namespace not found

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  http://logs.openstack.org/21/174821/1/gate/gate-neutron-dsvm-
  functional/eb6b441/console.html#_2015-04-20_07_41_14_649

  This is happening quite often and is not just this specific test:

  2015-04-20 07:41:14.649 | 2015-04-20 07:41:14.642 | {4} 
neutron.tests.functional.agent.test_l3_agent.L3HATestFramework.test_ha_router_failover
 [12.678377s] ... FAILED
  2015-04-20 07:41:14.650 | 2015-04-20 07:41:14.643 | 
  2015-04-20 07:41:14.670 | 2015-04-20 07:41:14.647 | Captured traceback:
  2015-04-20 07:41:14.671 | 2015-04-20 07:41:14.649 | ~~~
  2015-04-20 07:41:14.671 | 2015-04-20 07:41:14.650 | Traceback (most 
recent call last):
  2015-04-20 07:41:14.672 | 2015-04-20 07:41:14.652 |   File 
neutron/tests/functional/agent/test_l3_agent.py, line 762, in 
test_ha_router_failover
  2015-04-20 07:41:14.673 | 2015-04-20 07:41:14.653 | 
ha_device.link.set_down()
  2015-04-20 07:41:14.673 | 2015-04-20 07:41:14.655 |   File 
neutron/agent/linux/ip_lib.py, line 279, in set_down
  2015-04-20 07:41:14.674 | 2015-04-20 07:41:14.658 | self._as_root([], 
('set', self.name, 'down'))
  2015-04-20 07:41:14.675 | 2015-04-20 07:41:14.661 |   File 
neutron/agent/linux/ip_lib.py, line 222, in _as_root
  2015-04-20 07:41:14.675 | 2015-04-20 07:41:14.663 | 
use_root_namespace=use_root_namespace)
  2015-04-20 07:41:14.676 | 

[Yahoo-eng-team] [Bug 1415106] Re: gate-tempest-dsvm-large-ops fails with RPC MessagingTimeout in _try_deallocate_network

2015-04-20 Thread Matt Riedemann
** Changed in: pbr
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415106

Title:
  gate-tempest-dsvm-large-ops fails with RPC MessagingTimeout in
  _try_deallocate_network

Status in OpenStack Compute (Nova):
  Invalid
Status in Python Build Reasonableness:
  Fix Released

Bug description:
  http://logs.openstack.org/60/147460/7/gate/gate-tempest-dsvm-large-
  ops/12ba360/logs/screen-n-cpu-1.txt.gz?level=TRACE

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU2V0dGluZyBpbnN0YW5jZSB2bV9zdGF0ZSB0byBFUlJPUlwiIEFORCBtZXNzYWdlOlwiX3RyeV9kZWFsbG9jYXRlX25ldHdvcmtcIiBBTkQgbWVzc2FnZTpcIk1lc3NhZ2luZ1RpbWVvdXRcIiBBTkQgYnVpbGRfbmFtZTpcImdhdGUtdGVtcGVzdC1kc3ZtLWxhcmdlLW9wc1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIyMzc3NTI3MzgwLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  61 hits in 7 days, check and gate, all failures.

  2015-01-27 10:55:34.025 ERROR nova.compute.manager 
[req-34f0ef2f-34f0-4b06-9e30-fd72565a3388 TestLargeOpsScenario-405264936 
TestLargeOpsScenario-221766999] [instance: 
63145652-811e-4124-8ef7-cc64b594336a] Setting instance vm_state to ERROR
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] Traceback (most recent call last):
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2569, in 
do_terminate_instance
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] self._delete_instance(context, 
instance, bdms, quotas)
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/opt/stack/new/nova/nova/hooks.py, line 149, in inner
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] rv = f(*args, **kwargs)
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2538, in _delete_instance
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] quotas.rollback()
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 82, in 
__exit__
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] six.reraise(self.type_, self.value, 
self.tb)
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2515, in _delete_instance
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] self._shutdown_instance(context, 
instance, bdms)
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2449, in _shutdown_instance
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] self._try_deallocate_network(context, 
instance, requested_networks)
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2379, in 
_try_deallocate_network
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] 
self._set_instance_error_state(context, instance)
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 82, in 
__exit__
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] six.reraise(self.type_, self.value, 
self.tb)
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2374, in 
_try_deallocate_network
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] self._deallocate_network(context, 
instance, requested_networks)
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 1954, in _deallocate_network
  2015-01-27 10:55:34.025 32626 TRACE nova.compute.manager [instance: 
63145652-811e-4124-8ef7-cc64b594336a] 

[Yahoo-eng-team] [Bug 1446326] [NEW] 403 response from Nova when making a DELETE call for an image in pending_delete

2015-04-20 Thread nikhil komawar
Public bug reported:

Context and information:
--
Currently, a 404 is seen by the user when an image-delete call is made via the 
Glance API or through the Images API of Nova for an Image in deleted status.

However, if an Image is in pending_delete and a user with the UUID of
that Image tries an image-delete call from the Nova API, she gets back
a 403, which is not consistent. The user should get a 404 back.

Notes:
--
* The user needs to specify the UUID; the name is not sufficient.
* For an image-show call the user is able to see the Image in DELETED status, 
with the appropriate metadata for an Image in deleted or pending_delete status 
in Glance, as nova passes in the force_show_deleted=True flag by default.

Feedback needed and action to be taken:
---
Nova should return a 404 to the user for an image-delete call if the 
Image is flagged as deleted in the Glance DB (deleted=True), irrespective 
of whether the Image status is deleted or pending_delete.
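
One possible shape of the desired behavior in Nova's image layer (a hedged
sketch only: the wrapper below and its exception choices are assumptions,
not the actual Nova code, and how best to detect the already-deleted case is
part of the feedback being requested):

    import webob.exc
    from nova import exception

    def delete_image(image_api, context, image_id):
        # Translate "already deleted / pending_delete" into a 404 instead
        # of surfacing the 403 that Glance returns today.
        try:
            image_api.delete(context, image_id)
        except (exception.ImageNotFound, exception.ImageNotAuthorized):
            raise webob.exc.HTTPNotFound(
                explanation="Image %s could not be found." % image_id)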

** Affects: glance
 Importance: Medium
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: glance
   Importance: Undecided = Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1446326

Title:
  403 response from Nova when making a DELETE call for an image in
  pending_delete

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Context and information:
  --
  Currently, a 404 is seen by the user when an image-delete call is made via the 
Glance API or through the Images API of Nova for an Image in deleted status.

  However, if an Image is in pending_delete and a user with the UUID
  of that Image tries an image-delete call from the Nova API, she gets
  back a 403, which is not consistent. The user should get a 404 back.

  Notes:
  --
  * The user needs to specify the UUID; the name is not sufficient.
  * For an image-show call the user is able to see the Image in DELETED status, 
with the appropriate metadata for an Image in deleted or pending_delete status 
in Glance, as nova passes in the force_show_deleted=True flag by default.

  Feedback needed and action to be taken:
  ---
  Nova should return a 404 to the user for an image-delete call if the 
Image is flagged as deleted in the Glance DB (deleted=True), irrespective 
of whether the Image status is deleted or pending_delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1446326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446288] [NEW] DVR functional tests leak resources

2015-04-20 Thread Assaf Muller
Public bug reported:

test_dvr_router_add_internal_network_set_arp_cache and
test_dvr_router_rem_fips_on_restarted_agent don't clean up after
themselves when run successfully.
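
A minimal sketch of the usual remedy (the helper names are assumptions
about the functional test framework, not quotes from it): register the
cleanup as soon as the router is created, so it runs on success as well
as on failure:

    def _setup_router(self, agent, router_info):
        router = self.manage_router(agent, router_info)  # assumed helper
        # addCleanup fires even when the test passes, unlike ad-hoc teardown
        self.addCleanup(self._delete_router, agent, router.router_id)
        return router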

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Assaf Muller (amuller)

** Changed in: neutron
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446288

Title:
  DVR functional tests leak resources

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  test_dvr_router_add_internal_network_set_arp_cache and
  test_dvr_router_rem_fips_on_restarted_agent don't clean up after
  themselves when run successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446284] [NEW] functional tests fail non-deterministically because of full-stack

2015-04-20 Thread John Schwarz
Public bug reported:

On startup, the L3 agent looks for namespaces to clean up that don't
belong to it, in order to minimize system resources (namespaces) on the
machine.

The fullstack tests run an L3 agent that then deletes namespaces it
doesn't recognize. This in turn causes the deletion of namespaces used
by the functional tests, causing non-deterministic failures at the
gate.

The code in question which is responsible for the deletion of
namespaces:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/namespace_manager.py#L73
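
Roughly, that cleanup pass amounts to the following (a simplified
paraphrase with placeholder names, not the real code; see the link above
for the authoritative logic):

    ROUTER_PREFIXES = ('qrouter-', 'snat-')

    def cleanup_stale_namespaces(known_router_ids, host_namespaces):
        # Any router-looking namespace whose ID this agent does not
        # recognize is treated as stale and removed, which is exactly
        # what deletes the functional tests' namespaces.
        for ns in host_namespaces:
            if not ns.startswith(ROUTER_PREFIXES):
                continue
            router_id = ns.split('-', 1)[1]
            if router_id not in known_router_ids:
                delete_namespace(ns)  # placeholder for the agent's cleanup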

How to replicate:
1. Run 'tox -e dsvm-functional -- neutron.tests.functional.agent.test_l3_agent 
neutron.tests.fullstack'
2. Some tests are likely to fail
3. ???
4. Profit?

Example of test runs:
1. http://pastebin.com/63n7Y2YK

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446284

Title:
  functional tests fail non-deterministically because of full-stack

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On startup, the L3 agent looks for namespaces to clean up that don't
  belong to it, in order to minimize system resources (namespaces) on
  the machine.

  The fullstack tests run an L3 agent that then deletes namespaces it
  doesn't recognize. This in turn causes the deletion of namespaces
  used by the functional tests, causing non-deterministic failures at
  the gate.

  The code in question which is responsible for the deletion of
  namespaces:
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/namespace_manager.py#L73

  How to replicate:
  1. Run 'tox -e dsvm-functional -- 
neutron.tests.functional.agent.test_l3_agent neutron.tests.fullstack'
  2. Some tests are likely to fail
  3. ???
  4. Profit?

  Example of test runs:
  1. http://pastebin.com/63n7Y2YK

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445335] Re: create/delete flavor permissions should be controlled by policy.json

2015-04-20 Thread Jeremy Stanley
You've switched the status of this bug to indicate an exploitable
security vulnerability. Can you please clarify the conditions under
which this bug can be exploited by a malicious actor, and the extent of
the impact it implies?

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New = Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445335

Title:
  create/delete flavor permissions should be controlled by policy.json

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  The create/delete flavor REST API always expects the user to have
  admin privileges and ignores the rule defined in nova's policy.json.
  This behavior is observed after this change:
  https://review.openstack.org/#/c/150352/.

  The expected behavior is that permissions are controlled by the rule
  defined in the policy file, and that it is not mandated that only an
  admin can create/delete a flavor.
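
  For reference, the kind of rule the reporter expects to be honored looks
  like this in nova's policy.json (the rule name differs between the v2 and
  v2.1 APIs, so treat this as illustrative):

    {
        "compute_extension:flavormanage": "rule:admin_or_owner"
    }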

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441300] Re: keystone-manage man page updates

2015-04-20 Thread Thierry Carrez
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
Milestone: None = kilo-rc2

** Changed in: keystone/kilo
   Status: New = In Progress

** Changed in: keystone/kilo
   Importance: Undecided = Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441300

Title:
  keystone-manage man page updates

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  In Progress

Bug description:
  
  The keystone-manage man page doesn't show any of the new fernet commands, so 
it's out of date.
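
  For context, the fernet commands the man page should now cover include
  (the options shown are the commonly documented ones; check
  keystone-manage --help for the authoritative list):

    $ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    $ keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone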

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446286] [NEW] Exporting a deactivated image returns a generic task failure message

2015-04-20 Thread Luke Wollney
Public bug reported:

Overview:
When a user attempts to export a deactivated image, the operation fails as 
expected, but the message that is returned explaining why it failed is very 
generic.

Steps to reproduce:
1) Import an image as user A
2) Deactivate the image as admin
3) Export the image as user A
4) Notice the export task fails, but with a generic error message

Expected:
A more detailed error message as to why the export task failed for a 
deactivated image

Actual:
A generic error message reading Unknown error occurred during import export 
is returned.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1446286

Title:
  Exporting a deactivated image returns a generic task failure message

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Overview:
  When a user attempts to export a deactivated image, the operation fails as 
expected, but the message that is returned explaining why it failed is very 
generic.

  Steps to reproduce:
  1) Import an image as user A
  2) Deactivate the image as admin
  3) Export the image as user A
  4) Notice the export task fails, but with a generic error message

  Expected:
  A more detailed error message as to why the export task failed for a 
deactivated image

  Actual:
  A generic error message reading Unknown error occurred during import export 
is returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1446286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445917] Re: (juno) gate-grenade-dsvm-ironic-sideways failing with NoValidHost due to PortNotUsable

2015-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/175219
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=92b63aa8211b6a54f47c3ec2f1c77abf15017be8
Submitter: Jenkins
Branch:master

commit 92b63aa8211b6a54f47c3ec2f1c77abf15017be8
Author: Matt Riedemann mrie...@us.ibm.com
Date:   Sun Apr 19 15:45:38 2015 +

Revert "Test creation of server attached to created port"

This reverts commit 2b34ec3ff444e6e6ec7b3e52832ecd6e8ca20552

This breaks gate-grenade-dsvm-ironic-sideways on
stable/juno because the requested port's mac address is not
in the list of available mac addresses from the ironic
driver.

It's also unclear how useful this is given we already have
the test_preserve_preexisting_port test which is
essentially testing the same scenario, except it's not
run on stable/icehouse or stable/juno since preserving
pre-existing ports in nova wasn't fixed until Kilo.

Change-Id: I24403c1ae734b2137ddee5c3bf5a1594cf5375d8
Closes-Bug: #1445917


** Changed in: tempest
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445917

Title:
  (juno) gate-grenade-dsvm-ironic-sideways failing with NoValidHost due
  to PortNotUsable

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  http://logs.openstack.org/82/169782/1/gate/gate-grenade-dsvm-ironic-
  
sideways/d82f6a1/logs/new/screen-n-cpu.txt.gz?level=TRACE#_2015-04-19_08_36_06_412

  2015-04-19 08:36:06.209 26279 ERROR nova.virt.ironic.driver [-] Error 
preparing deploy for instance 85a4a66e-24b0-48a4-ada6-1d29edee7adb on baremetal 
node f75b4bf9-6e6d-44fb-b4e8-ece394357440.
  2015-04-19 08:36:06.412 26279 ERROR nova.compute.manager [-] [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] Instance failed to spawn
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] Traceback (most recent call last):
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2261, in _build_resources
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] yield resources
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2131, in 
_build_and_run_instance
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] block_device_info=block_device_info)
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/virt/ironic/driver.py, line 645, in spawn
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] self._cleanup_deploy(context, node, 
instance, network_info)
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] six.reraise(self.type_, self.value, 
self.tb)
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/virt/ironic/driver.py, line 637, in spawn
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] self._plug_vifs(node, instance, 
network_info)
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/virt/ironic/driver.py, line 900, in _plug_vifs
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] network_info_str = str(network_info)
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/network/model.py, line 467, in __str__
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] return self._sync_wrapper(fn, *args, 
**kwargs)
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/network/model.py, line 450, in _sync_wrapper
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb] self.wait()
  2015-04-19 08:36:06.412 26279 TRACE nova.compute.manager [instance: 
85a4a66e-24b0-48a4-ada6-1d29edee7adb]   File 
/opt/stack/new/nova/nova/network/model.py, 

[Yahoo-eng-team] [Bug 1446349] [NEW] L3 agent can fail to delete router silently

2015-04-20 Thread Assaf Muller
Public bug reported:

The L3 agent currently tries to delete a router by wrapping the delete
call in a try/except Exception block, but fails to log the caught
exception.
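
A minimal sketch of the fix (illustrative only; the method and attribute
names below follow the agent's conventions but are assumptions, not the
actual patch):

    from oslo_log import log as logging
    from neutron.i18n import _LE

    LOG = logging.getLogger(__name__)

    def safe_router_removed(agent, router_id):
        # Wrap the delete as the agent does today, but log the caught
        # exception instead of swallowing it, and request a resync.
        try:
            agent._router_removed(router_id)
        except Exception:
            LOG.exception(_LE('Error while deleting router %s'), router_id)
            agent.fullsync = True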

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Assaf Muller (amuller)

** Changed in: neutron
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446349

Title:
  L3 agent can fail to delete router silently

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The L3 agent currently tries to delete a router by wrapping the delete
  call in a try/except Exception block, but fails to log the caught
  exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp