[Yahoo-eng-team] [Bug 1818317] [NEW] Network filtering by MTU is not supported with the "net-mtu-writable" module

2019-03-01 Thread kay
Public bug reported:

The read-only MTU definition (https://github.com/openstack/neutron-
lib/blob/fc2a81058bfd3ba9fd3501660156c71ff1c8129c/neutron_lib/api/definitions/network_mtu.py#L51..L52)
supports filtering when listing networks, however the writable
"net-mtu-writable" definition (https://github.com/openstack/neutron-
lib/blob/fc2a81058bfd3ba9fd3501660156c71ff1c8129c/neutron_lib/api/definitions/network_mtu_writable.py#L54..L56)
does not support filtering by MTU:

Bad request with: [GET 
http://192.168.200.182:9696/v2.0/networks?mtu=1450&name=TESTACC-ZtUqAVkR]:
{"NeutronError": {"message": "[u'mtu'] is invalid attribute for filtering", 
"type": "HTTPBadRequest", "detail": ""}}

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: mtu network neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818317

Title:
  Network filtering by MTU is not supported with the "net-mtu-writable"
  module

Status in neutron:
  New

Bug description:
  The read-only MTU definition (https://github.com/openstack/neutron-
lib/blob/fc2a81058bfd3ba9fd3501660156c71ff1c8129c/neutron_lib/api/definitions/network_mtu.py#L51..L52)
  supports filtering when listing networks, however the writable
  "net-mtu-writable" definition (https://github.com/openstack/neutron-
lib/blob/fc2a81058bfd3ba9fd3501660156c71ff1c8129c/neutron_lib/api/definitions/network_mtu_writable.py#L54..L56)
  does not support filtering by MTU:

  Bad request with: [GET 
http://192.168.200.182:9696/v2.0/networks?mtu=1450&name=TESTACC-ZtUqAVkR]:
  {"NeutronError": {"message": "[u'mtu'] is invalid attribute for filtering", 
"type": "HTTPBadRequest", "detail": ""}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818318] [NEW] Network filtering by dns_domain is not supported in the "dns-integration" extension

2019-03-01 Thread kay
Public bug reported:

Filtering networks by "dns_domain"
(http://neutron/v2.0/networks?dns_domain=foo) returns:

in_() not yet supported for relationships.  For a simple many-to-one,
use in_() against the set of foreign key values.

However, filtering ports by dns_name works fine:

http://neutron/v2.0/ports?dns_name=bar

I haven't managed to determine why.
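The error text comes from SQLAlchemy: calling in_() on a relationship-backed
attribute is not supported, while calling it on a plain column is. A
self-contained sketch (illustrative models, not neutron's actual ones) that
reproduces the same message:

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Network(Base):
    __tablename__ = 'networks'
    id = Column(Integer, primary_key=True)


class NetworkDNSDomain(Base):
    __tablename__ = 'network_dns_domains'
    network_id = Column(Integer, ForeignKey('networks.id'), primary_key=True)
    dns_domain = Column(String(255))
    network = relationship(Network)


# Filtering on the plain column builds a criterion just fine ...
print(NetworkDNSDomain.dns_domain.in_(['foo']))

# ... but filtering on the relationship attribute raises the error above.
try:
    NetworkDNSDomain.network.in_([1])
except NotImplementedError as exc:
    print(exc)  # "in_() not yet supported for relationships. ..."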

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dns extension neutron ports

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818318

Title:
  Network filtering by dns_domain is not supported in the "dns-
  integration" extension

Status in neutron:
  New

Bug description:
  Filtering networks by "dns_domain"
  (http://neutron/v2.0/networks?dns_domain=foo) returns:

  in_() not yet supported for relationships.  For a simple many-to-one,
  use in_() against the set of foreign key values.

  However, filtering ports by dns_name works fine:

  http://neutron/v2.0/ports?dns_name=bar

  I haven't managed to determine why.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818318/+subscriptions



[Yahoo-eng-team] [Bug 1816489] Re: Functional test neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase. test_ha_router_lifecycle failing

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/640400
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e6351ab11e2cfaea18ab963e32052ad110d2b899
Submitter: Zuul
Branch:master

commit e6351ab11e2cfaea18ab963e32052ad110d2b899
Author: Slawek Kaplonski 
Date:   Fri Mar 1 16:11:59 2019 +0100

[Functional] Don't assert that HA router doesn't have IPs configured

In functional tests of HA router, in the
L3AgentTestFramework._router_lifecycle method there was an assertion
that the HA router doesn't have IPs configured in the router's
namespace at the beginning.

That could lead to test failures because sometimes the keepalived
process switched the router from standby to master before this
assertion was done and IPs were already configured.

There is almost no value in doing this assertion as it runs just after
the router was created, so it is "normal" that there are no IP addresses
configured yet.
Because of that, this patch removes the assertion.

Change-Id: Ib509a7226eb94483a0aaf2d930f329e419b8e135
Closes-Bug: #1816489


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816489

Title:
  Functional test
  neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.
  test_ha_router_lifecycle failing

Status in neutron:
  Fix Released

Bug description:
  Functional test
  neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.
  test_ha_router_lifecycle is failing from time to time.

  Example of failure: http://logs.openstack.org/68/623268/14/gate
  /neutron-functional-python27/4dc7fb8/logs/testr_results.html.gz

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%2081%2C%20in%20test_ha_router_lifecycle%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1816489/+subscriptions



[Yahoo-eng-team] [Bug 1804518] Re: Remove obsolete protocol policies from policy.v3cloudsample.json

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/625357
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=24b8db9e064713e7350f83cd77ed197b050b1fe1
Submitter: Zuul
Branch:master

commit 24b8db9e064713e7350f83cd77ed197b050b1fe1
Author: Lance Bragstad 
Date:   Fri Dec 14 21:54:42 2018 +

Remove protocol policies from v3cloudsample.json

By incorporating system-scope and default roles, we've effectively
made these policies obsolete. We can simplify what we maintain and
provide a more consistent, unified view of default protocol
behavior by removing them.

Related-Bug: 1806762
Closes-Bug: 1804518
Change-Id: Ia839555d8211596213311c4246135cdae4f46ab2


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1804518

Title:
  Remove obsolete protocol policies from policy.v3cloudsample.json

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Once support for scope types landed in the protocol API policies, the
  policies in policy.v3cloudsample.json became obsolete [0][1].

  We should add formal protection for the policies with enforce_scope =
  True in keystone.tests.unit.protection.v3 and remove the old policies
  from the v3 sample policy file.

  This will reduce confusion by having a true default policy for
  protocols.

  [0] https://review.openstack.org/#/c/526161/
  [1] 
https://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json?id=fb73912d87b61c419a86c0a9415ebdcf1e186927#n204

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1804518/+subscriptions



[Yahoo-eng-team] [Bug 1817435] Re: Error while stopping radvd on dvr ha router

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/640006
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fe4e7724cde61a036262a392abd1be76013f699c
Submitter: Zuul
Branch:master

commit fe4e7724cde61a036262a392abd1be76013f699c
Author: Slawek Kaplonski 
Date:   Thu Feb 28 12:32:18 2019 +0100

Don't disable radvd if it wasn't initialized

In some cases on a dvr ha router it may happen that
RouterInfo.radvd.disable() will be called even if the
radvd DaemonMonitor wasn't initialized earlier and it is
None.
To prevent an exception in such a case, this patch adds a check
that the DaemonMonitor is not None before calling the disable()
method on it.

Change-Id: Ib9b5f4eeae6e4cebcb958928e6521cf1d69b049c
Closes-Bug: #1817435
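A self-contained illustration of that guard (class and attribute names are
stand-ins, not neutron's actual RouterInfo code):

class RouterInfoSketch:
    """Illustration only; stands in for neutron's RouterInfo."""

    def __init__(self, radvd_monitor=None):
        # Stays None on routers where the radvd DaemonMonitor was never
        # initialized (the DVR HA case described in this bug).
        self.radvd = radvd_monitor

    def disable_radvd(self):
        # Guard against "AttributeError: 'NoneType' object has no
        # attribute 'disable'" by only disabling an existing monitor.
        if self.radvd is not None:
            self.radvd.disable()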


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1817435

Title:
  Error while stopping radvd on dvr ha router

Status in neutron:
  Fix Released

Bug description:
  In the neutron-tempest-dvr-multinode-ha-full job I found errors in the L3 agent's 
logs. An example of the error is at:
  
http://logs.openstack.org/79/633979/11/check/neutron-tempest-dvr-ha-multinode-full/415f056/controller/logs/screen-q-l3.txt.gz?level=ERROR#_Feb_22_12_00_34_232612

  Error looks like:

  Feb 22 12:00:34.219810 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent Exception: Unable to 
process HA router 1f90a750-d3af-454a-9031-29f399cf6457 without HA port
  Feb 22 12:00:34.219810 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent 
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent [-] Error while deleting 
router 1f90a750-d3af-454a-9031-29f399cf6457: AttributeError: 'NoneType' object 
has no attribute 'disable'
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent Traceback (most recent 
call last):
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 447, in _router_added
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent ri.delete()
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 461, in delete
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent super(HaRouter, 
self).delete()
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 431, in delete
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent self.disable_radvd()
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 547, in disable_radvd
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent self.radvd.disable()
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent AttributeError: 
'NoneType' object has no attribute 'disable'
  Feb 22 12:00:34.232612 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent 
  Feb 22 12:00:34.235509 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router: 1f90a750-d3af-454a-9031-29f399cf6457: Exception: Unable to 
process HA router 1f90a750-d3af-454a-9031-29f399cf6457 without HA port
  Feb 22 12:00:34.235509 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent Traceback (most recent 
call last):
  Feb 22 12:00:34.235509 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 687, in 
_process_routers_if_compatible
  Feb 22 12:00:34.235509 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  Feb 22 12:00:34.235509 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 562, in 
_process_router_if_compatible
  Feb 22 12:00:34.235509 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: ERROR neutron.agent.l3.agent 
self._process_added_router(router)
  Feb 22 12:00:34.235509 ubuntu-bionic-rax-ord-0002911263 
neutron-l3-agent[21802]: 

[Yahoo-eng-team] [Bug 1815791] Re: Race condition causes Nova to shut off a successfully deployed baremetal server

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/636699
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=19cb8280232fd3b0baa475d061ea9fb10e1a
Submitter: Zuul
Branch:master

commit 19cb8280232fd3b0baa475d061ea9fb10e1a
Author: Jim Rollenhagen 
Date:   Wed Feb 13 12:59:53 2019 -0500

ironic: check fresh data when sync_power_state doesn't line up

We return cached data to sync_power_state to avoid pummeling the ironic
API. However, this can lead to a race condition where an instance is
powered on, but nova thinks it should be off and calls stop(). Check
again without the cache when this happens to make sure we don't
unnecessarily kill an instance.

Closes-Bug: #1815791
Change-Id: I907b69eb689cf6c169a4869cfc7889308ca419d5


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815791

Title:
  Race condition causes Nova to shut off a successfully deployed
  baremetal server

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When booting a baremetal server with Nova, we see Ironic report a
  successful power on:

ironic-conductor.log:2019-02-13 10:52:15.901 7 INFO
  ironic.conductor.utils [req-774350ce-9392-4096-b66c-20ad3d794e4e
  7a9b1ac45e084e7cbeb9cb740ffe8d08 41ea8af8d00e46438c7be3b182bbb53f -
  default default] Successfully set node a00696d5-32ba-
  475e-9528-59bf11cffea6 power state to power on by power on.

  But seconds later, Nova (a) triggers a power state sync and then (b)
  decides the node is in state "power off" and shuts it down:

nova-compute.log:2019-02-13 10:52:17.289 7 DEBUG nova.compute.manager 
[req-9bcae7d4-4201-40ea-a66c-c5954117f0e4 - - - - -] Triggering sync for uuid 
dcb4f055-cda4-4d61-ba8f-976645c4e92a _sync_power_states 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:7516
nova-compute.log:2019-02-13 10:52:17.295 7 DEBUG 
oslo_concurrency.lockutils [-] Lock "dcb4f055-cda4-4d61-ba8f-976645c4e92a" 
acquired by "nova.compute.manager.query_driver_power_state_and_sync" :: waited 
0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:327
nova-compute.log:2019-02-13 10:52:17.344 7 WARNING nova.compute.manager 
[-] [instance: dcb4f055-cda4-4d61-ba8f-976645c4e92a] Instance shutdown by 
itself. Calling the stop API. Current vm_state: active, current task_state: 
None, original DB power_state: 4, current VM power_state: 4
nova-compute.log:2019-02-13 10:52:17.345 7 DEBUG nova.compute.api [-] 
[instance: dcb4f055-cda4-4d61-ba8f-976645c4e92a] Going to try to stop instance 
force_stop /usr/lib/python2.7/site-packages/nova/compute/api.py:2291

  It looks like Nova is using stale cache data to make this decision.

  jroll on irc suggests a solution may look like
  https://review.openstack.org/#/c/636699/ (bypass cache data to verify
  power state before shutting down the server).
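  A minimal, self-contained sketch of that approach (names are illustrative
  stand-ins, not nova internals): only act on the cached state if a fresh
  read still disagrees with what is expected.

POWER_ON, POWER_OFF = 'power on', 'power off'


def should_stop_instance(expected_state, cached_state, fetch_fresh_state):
    """Return True only when a fresh read still contradicts expectations."""
    if cached_state == expected_state:
        return False
    # The cache says the instance shut itself down; re-read before calling
    # the (destructive) stop API.
    return fetch_fresh_state() != expected_state


# Stale cache says "power off", a fresh API read says "power on" -> no stop.
print(should_stop_instance(POWER_ON, POWER_OFF, lambda: POWER_ON))  # False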

  This is with nova @ ad842aa and ironic @ 4404292.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815791/+subscriptions



[Yahoo-eng-team] [Bug 1818310] [NEW] api-ref: block_device_mapping_v2.volume_size does not mention when it's required

2019-03-01 Thread Matt Riedemann
Public bug reported:

As seen in this failed tempest test:

http://logs.openstack.org/50/332050/18/check/tempest-
full/f146d24/controller/logs/screen-n-api.txt.gz#_Mar_01_21_57_53_623712

Mar 01 21:57:53.536297 ubuntu-bionic-rax-dfw-0003242602 
devstack@n-api.service[3071]: DEBUG nova.compute.api [None 
req-3ba6b743-aa11-468f-a524-dc19ceb16b4e 
tempest-DetachAttachBootVolumeShelveTestJSON-1339257189 
tempest-DetachAttachBootVolumeShelveTestJSON-1339257189] [instance: 
ba7510aa-3f20-4849-a5e4-5f582a888d6e] block_device_mapping 
[BlockDeviceMapping(attachment_id=,boot_index=0,connection_info=None,created_at=,delete_on_termination=False,deleted=,deleted_at=,destination_type='volume',device_name=None,device_type=None,disk_bus=None,guest_format=None,id=,image_id='83216c87-cd74-451f-b814-7a0baa0a51c7',instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='image',tag='root',updated_at=,uuid=,volume_id=None,volume_size=None,volume_type=None)]
 {{(pid=3072) _bdm_validate_set_size_and_instance 
/opt/stack/nova/nova/compute/api.py:1347}}
Mar 01 21:57:53.623712 ubuntu-bionic-rax-dfw-0003242602 
devstack@n-api.service[3071]: INFO nova.api.openstack.wsgi [None 
req-3ba6b743-aa11-468f-a524-dc19ceb16b4e 
tempest-DetachAttachBootVolumeShelveTestJSON-1339257189 
tempest-DetachAttachBootVolumeShelveTestJSON-1339257189] HTTP exception thrown: 
Images with destination_type 'volume' need to have a non-zero size specified

Trying to boot from volume with source_type='image' and
destination_type='volume' requires the volume_size to be specified:

https://github.com/openstack/nova/blob/5a09c81af3b438ecbcf27fa653095ff55abb3ed4/nova/compute/api.py#L1461

But that's not mentioned in the API reference which just says (for
volume_size):

"The size of the volume (in GiB). This is integer value from range 1 to
2147483647 which can be requested as integer and string."
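For illustration, a request body fragment (values are placeholders, not taken
from the failed test) that satisfies the check referenced above: when
source_type is 'image' and destination_type is 'volume', volume_size must be
set.

# Example only; UUIDs and sizes are placeholders.
server_create_body = {
    "server": {
        "name": "bfv-server",
        "flavorRef": "1",
        "block_device_mapping_v2": [{
            "boot_index": 0,
            "uuid": "83216c87-cd74-451f-b814-7a0baa0a51c7",  # image UUID
            "source_type": "image",
            "destination_type": "volume",
            "volume_size": 1,        # required for image->volume, in GiB
            "delete_on_termination": False,
        }],
    }
}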

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: api-ref low-hanging-fruit volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818310

Title:
  api-ref: block_device_mapping_v2.volume_size does not mention when
  it's required

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  As seen in this failed tempest test:

  http://logs.openstack.org/50/332050/18/check/tempest-
  full/f146d24/controller/logs/screen-n-api.txt.gz#_Mar_01_21_57_53_623712

  Mar 01 21:57:53.536297 ubuntu-bionic-rax-dfw-0003242602 
devstack@n-api.service[3071]: DEBUG nova.compute.api [None 
req-3ba6b743-aa11-468f-a524-dc19ceb16b4e 
tempest-DetachAttachBootVolumeShelveTestJSON-1339257189 
tempest-DetachAttachBootVolumeShelveTestJSON-1339257189] [instance: 
ba7510aa-3f20-4849-a5e4-5f582a888d6e] block_device_mapping 
[BlockDeviceMapping(attachment_id=,boot_index=0,connection_info=None,created_at=,delete_on_termination=False,deleted=,deleted_at=,destination_type='volume',device_name=None,device_type=None,disk_bus=None,guest_format=None,id=,image_id='83216c87-cd74-451f-b814-7a0baa0a51c7',instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='image',tag='root',updated_at=,uuid=,volume_id=None,volume_size=None,volume_type=None)]
 {{(pid=3072) _bdm_validate_set_size_and_instance 
/opt/stack/nova/nova/compute/api.py:1347}}
  Mar 01 21:57:53.623712 ubuntu-bionic-rax-dfw-0003242602 
devstack@n-api.service[3071]: INFO nova.api.openstack.wsgi [None 
req-3ba6b743-aa11-468f-a524-dc19ceb16b4e 
tempest-DetachAttachBootVolumeShelveTestJSON-1339257189 
tempest-DetachAttachBootVolumeShelveTestJSON-1339257189] HTTP exception thrown: 
Images with destination_type 'volume' need to have a non-zero size specified

  Trying to boot from volume with source_type='image' and
  destination_type='volume' requires the volume_size to be specified:

  
https://github.com/openstack/nova/blob/5a09c81af3b438ecbcf27fa653095ff55abb3ed4/nova/compute/api.py#L1461

  But that's not mentioned in the API reference which just says (for
  volume_size):

  "The size of the volume (in GiB). This is integer value from range 1
  to 2147483647 which can be requested as integer and string."

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818310/+subscriptions



[Yahoo-eng-team] [Bug 1816833] Fix merged to keystone (master)

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/638310
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=b35fb58ea5bd722ba5a0fe415a217b10a9041727
Submitter: Zuul
Branch:master

commit b35fb58ea5bd722ba5a0fe415a217b10a9041727
Author: Lance Bragstad 
Date:   Wed Feb 20 18:19:05 2019 +

Add role assignment test coverage for system members

This commit adds role assignment test coverage for users who have the
member role assigned on the system.

Subsequent patches will:

  - add test coverage for system admins
  - add functionality for domain readers
  - add functionality for domain members
  - add functionality for domain admins
  - add functionality for project readers
  - add functionality for project members
  - add functionality for project admins
  - remove the obsolete policies from policy.v3cloudsample.json

Change-Id: Ie5333bf61a704d4167004457ec1d9b19b4bb01e8
Partial-Bug: 1750673
Partial-Bug: 1816833


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1816833

Title:
  Role assignment API doesn't use default roles

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In Rocky, keystone implemented support to ensure at least three
  default roles were available [0]. The role assignment API
  (/v3/role_assignments) doesn't incorporate these defaults into its
  default policies [1], but it should.

  [0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/role_assignment.py?id=fb73912d87b61c419a86c0a9415ebdcf1e186927

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1816833/+subscriptions



[Yahoo-eng-team] [Bug 1818302] [NEW] Need to keep apply_network_config=false in azure data source and make it configurable through cloud-init

2019-03-01 Thread Hariharan Jayaraman
Public bug reported:

Cloud Provider: Azure
Image: Ubuntu 18.04 LTS

The new release of 18.04 LTS on Azure has apply_network_config=true, which
forces cloud-init to use metadata to configure all the IP addresses. If a
virtual machine has a lot of NICs with secondary IP addresses, by the time
the IPs are configured the Azure data source ends up using a secondary CA to
talk to the 168.* address, which is not supported. All communications must
use the primary CA. Due to this, a high percentage of provisioning failures
occurs on Ubuntu 18.04 for VMs with multiple NICs and secondary
addresses.


2019-02-05 00:26:53,652 - azure.py[DEBUG]: Azure endpoint found at 168.63.129.16
2019-02-05 00:26:53,652 - url_helper.py[DEBUG]: [0/11] open 
'http://168.63.129.16/machine/?comp=goalstate' with {'url': 
'http://168.63.129.16/machine/?comp=goalstate', 'allow_redirects': True, 
'method': 'GET', 'timeout': 5.0, 'headers': {'User-Agent': 
'Cloud-Init/18.4-0ubuntu1~18.04.1', 'x-ms-agent-name': 'WALinuxAgent', 
'x-ms-version': '2012-11-30'}} configuration
2019-02-05 00:26:58,660 - url_helper.py[DEBUG]: Please wait 1 seconds while we 
wait to try again
2019-02-05 00:26:59,662 - url_helper.py[DEBUG]: [1/11] open 
'http://168.63.129.16/machine/?comp=goalstate' with {'url': 
'http://168.63.129.16/machine/?comp=goalstate', 'allow_redirects': True, 
'method': 'GET', 'timeout': 5.0, 'headers': {'User-Agent': 
'Cloud-Init/18.4-0ubuntu1~18.04.1', 'x-ms-agent-name': 'WALinuxAgent', 
'x-ms-version': '2012-11-30'}} configuration
2019-02-05 00:27:04,668 - url_helper.py[DEBUG]: Please wait 1 seconds while we 
wait to try again
2019-02-05 00:27:05,670 - url_helper.py[DEBUG]: [2/11] open 
'http://168.63.129.16/machine/?comp=goalstate' with {'url': 
'http://168.63.129.16/machine/?comp=goalstate', 'allow_redirects': True, 
'method': 'GET', 'timeout': 5.0, 'headers': {'User-Agent': 
'Cloud-Init/18.4-0ubuntu1~18.04.1', 'x-ms-agent-name': 'WALinuxAgent', 
'x-ms-version': '2012-11-30'}} configuration
2019-02-05 00:27:10,676 - url_helper.py[DEBUG]: Please wait 1 seconds while we 
wait to try again
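For reference, a hedged sketch of the requested knob expressed as a
datasource override (the file path is just an example; this assumes the Azure
datasource keeps honouring apply_network_config from its datasource
configuration):

# e.g. /etc/cloud/cloud.cfg.d/99-azure-datasource.cfg (example path)
datasource:
  Azure:
    apply_network_config: false   # fall back to DHCP on the primary NIC
                                  # instead of rendering all NICs from IMDS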

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818302

Title:
  Need to keep apply_network_config=false in azure data source and make
  it configurable through cloud-init

Status in cloud-init:
  New

Bug description:
  Cloud Provider: Azure
  Image: Ubuntu 18.04 LTS

  The new release of 18.04 LTS on Azure has apply_network_config=true, which
  forces cloud-init to use metadata to configure all the IP addresses. If a
  virtual machine has a lot of NICs with secondary IP addresses, by the time
  the IPs are configured the Azure data source ends up using a secondary CA
  to talk to the 168.* address, which is not supported. All communications
  must use the primary CA. Due to this, a high percentage of provisioning
  failures occurs on Ubuntu 18.04 for VMs with multiple NICs and secondary
  addresses.

  
  2019-02-05 00:26:53,652 - azure.py[DEBUG]: Azure endpoint found at 
168.63.129.16
  2019-02-05 00:26:53,652 - url_helper.py[DEBUG]: [0/11] open 
'http://168.63.129.16/machine/?comp=goalstate' with {'url': 
'http://168.63.129.16/machine/?comp=goalstate', 'allow_redirects': True, 
'method': 'GET', 'timeout': 5.0, 'headers': {'User-Agent': 
'Cloud-Init/18.4-0ubuntu1~18.04.1', 'x-ms-agent-name': 'WALinuxAgent', 
'x-ms-version': '2012-11-30'}} configuration
  2019-02-05 00:26:58,660 - url_helper.py[DEBUG]: Please wait 1 seconds while 
we wait to try again
  2019-02-05 00:26:59,662 - url_helper.py[DEBUG]: [1/11] open 
'http://168.63.129.16/machine/?comp=goalstate' with {'url': 
'http://168.63.129.16/machine/?comp=goalstate', 'allow_redirects': True, 
'method': 'GET', 'timeout': 5.0, 'headers': {'User-Agent': 
'Cloud-Init/18.4-0ubuntu1~18.04.1', 'x-ms-agent-name': 'WALinuxAgent', 
'x-ms-version': '2012-11-30'}} configuration
  2019-02-05 00:27:04,668 - url_helper.py[DEBUG]: Please wait 1 seconds while 
we wait to try again
  2019-02-05 00:27:05,670 - url_helper.py[DEBUG]: [2/11] open 
'http://168.63.129.16/machine/?comp=goalstate' with {'url': 
'http://168.63.129.16/machine/?comp=goalstate', 'allow_redirects': True, 
'method': 'GET', 'timeout': 5.0, 'headers': {'User-Agent': 
'Cloud-Init/18.4-0ubuntu1~18.04.1', 'x-ms-agent-name': 'WALinuxAgent', 
'x-ms-version': '2012-11-30'}} configuration
  2019-02-05 00:27:10,676 - url_helper.py[DEBUG]: Please wait 1 seconds while 
we wait to try again

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818302/+subscriptions



[Yahoo-eng-team] [Bug 1815361] Re: [RFE] Add support for binding activate and deactivate in sriov mechanism driver

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/620123
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a05e1369b0f440f24a76f395a27e0bdac33ecd0c
Submitter: Zuul
Branch:master

commit a05e1369b0f440f24a76f395a27e0bdac33ecd0c
Author: Adrian Chiris 
Date:   Sun Nov 4 20:37:02 2018 +0200

Add support for binding activate and deactivate

Add support for binding activate and deactivate callbacks
in SR-IOV mechanism driver agent.

this work item is required for supporting VM live-migration
with SR-IOV in nova with libvirt as hypervisor.

This commit implements the following:
- Implement binding_activate() and binding_deactivate() methods in agent.
- Configure agent to listen on the relevant topics.
- RPC version bump on agent side.
- When processing an activated binding treat it as a newly added port.
- Deactivated binding will trigger a NO-OP as once a binding is
  deactivated it will be removed by Hypervisor.

Closes-Bug: #1815361

Change-Id: I6b7195e08ed8932cfc2785b66627de2854ead85d


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815361

Title:
  [RFE] Add support for binding activate and deactivate in sriov
  mechanism driver

Status in neutron:
  Fix Released

Bug description:
  this RFE is required to complete the following Nova blueprint:
  
https://blueprints.launchpad.net/nova/+spec/libvirt-neutron-sriov-livemigration

  Spec: https://github.com/openstack/nova-
  specs/blob/master/specs/stein/approved/libvirt-neutron-sriov-
  livemigration.rst

  As the change builds on the multiple port binding feature, it is required
  that the sriovnicswitch agent supports the binding_activate() and
  binding_deactivate() callbacks.

  
  The proposed changes to the sriovnicswitch agent are (a minimal sketch follows the list):

  - Implement binding_activate() and binding_deactivate() methods in agent.
  - Configure agent to listen on the relevant topics.
  - RPC version bump on agent side.
  - When processing an activated binding treat it as a newly added port.
  - Deactivated binding will trigger a NO-OP as once a binding is
deactivated it will be removed by Hypervisor.
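  A minimal sketch of those two callbacks (illustrative class and method
  signatures, not the actual agent code):

class SriovAgentBindingSketch:
    """Illustration of the behaviour described in the list above."""

    def __init__(self):
        # Ports queued here get picked up by the regular processing loop.
        self.updated_devices = set()

    def binding_activate(self, context, port_id=None, host=None):
        # Treat an activated binding like a newly added port.
        self.updated_devices.add(port_id)

    def binding_deactivate(self, context, port_id=None, host=None):
        # Deliberate no-op: once the binding is deactivated the hypervisor
        # removes the VF from the instance, so there is nothing to undo.
        pass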

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815361/+subscriptions



[Yahoo-eng-team] [Bug 1817963] Re: API reference tells users to not create servers with availability_zone "nova" but the server create samples use "nova" for the AZ :(

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/639874
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1241e3ec2a93b94d019ebaed8d5086d4622e6cc4
Submitter: Zuul
Branch:master

commit 1241e3ec2a93b94d019ebaed8d5086d4622e6cc4
Author: Matt Riedemann 
Date:   Wed Feb 27 19:53:34 2019 -0500

Stop using "nova" in API samples when creating a server

The "availability_zone" parameter for server create in the
API reference and the availabilty zone user docs both say
that users should not use the default availability zone (nova)
yet our server create API samples use "nova" which is...bad.

This change fixes the API samples and related tests to use
a fake "us-west" availability zone. For any samples that were
requesting an AZ when creating a server, those are changed from
requesting "nova" to requesting "us-west" and a new
AvailabilityZoneFixture is added to stub out the code used to
validate the requested AZ and what is shown in server detail
responses.

Some unused samples are removed from the os-availability-zone
directory and the API reference and AZ user docs are updated for
formatting and linking to other docs for reference.

Change-Id: I3161157f15f05a3ffaaf1b48e7beb6b3e59c5513
Closes-Bug: #1817963


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1817963

Title:
  API reference tells users to not create servers with availability_zone
  "nova" but the server create samples use "nova" for the AZ :(

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://developer.openstack.org/api-ref/compute/?expanded=create-
  server-detail#create-server

  From the "availability_zone" parameter description:

  "You can list the available availability zones by calling the os-
  availability-zone API, but you should avoid using the default
  availability zone when booting the instance. In general, the default
  availability zone is named nova. This AZ is only shown when listing
  the availability zones as an admin."

  And the user docs on AZs:

  https://docs.openstack.org/nova/latest/user/aggregates.html
  #availability-zones-azs

  Yet the 2.1 and 2.63 samples use:

  "availability_zone": "nova",

  The API samples should be updated to match the warning in the
  parameter description.
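  For illustration, a server create request body (example values) that
  follows the guidance rather than the old samples: request a real, named AZ
  instead of the default "nova".

# Example only; the image and flavor references are placeholders.
server_create_body = {
    "server": {
        "name": "test-server",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "flavorRef": "1",
        "availability_zone": "us-west",   # a named AZ, not the default "nova"
    }
}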

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1817963/+subscriptions



[Yahoo-eng-team] [Bug 1818295] [NEW] Only Ironic public endpoint is supported

2019-03-01 Thread Guang Yee
Public bug reported:

Currently, there are a number of places in Ironic that do endpoint lookups from 
the Keystone service catalog. By default, keystoneauth sets it to 'public' if 
not specified.
Description
===
We are supposed to be able to select the endpoint type by specifying either the 
'interface' or 'valid_interfaces' option in the [keystone_authtoken] section in 
nova.conf. But that parameter is not being conveyed to ironicclient.

Consequently, this makes it impossible to use Ironic without having to
expose the public endpoint in the service catalog. Furthermore, for
security reasons, our controller nodes (subnet) have no route to the
public network and therefore will not be able to access the public
endpoint. This is a rather significant limitation in deploying Ironic.
Also, we seem to have broken backward compatibility as well, as Ironic
used to work in Pike without having to configure a public endpoint.

Steps to reproduce
==
1) enable Ironic in devstack
2) delete the Ironic public endpoint in Keystone
3) set 'valid_interfaces = internal' in the [ironic] section in nova.conf and 
restart nova-compute service
4) try to provision a server and it will fail with errors similar to these in 
nova-compute logs

2019-02-28 18:00:28.136 48891 ERROR nova.virt.ironic.driver [req-
4bace607-0ab6-45b5-911b-1df5fbcc0e01 None None] An unknown error has
occurred when trying to get the list of nodes from the Ironic inventory.
Error: Must provide Keystone credentials or user-defined endpoint, error
was: publicURL endpoint for baremetal service not found:
AmbiguousAuthSystem: Must provide Keystone credentials or user-defined
endpoint, error was: publicURL endpoint for baremetal service not found

Expected result
===
Server created without error.


Actual result
=
Server failed to create, with errors similar to these in nova-compute logs

2019-02-28 18:00:28.136 48891 ERROR nova.virt.ironic.driver [req-
4bace607-0ab6-45b5-911b-1df5fbcc0e01 None None] An unknown error has
occurred when trying to get the list of nodes from the Ironic inventory.
Error: Must provide Keystone credentials or user-defined endpoint, error
was: publicURL endpoint for baremetal service not found:
AmbiguousAuthSystem: Must provide Keystone credentials or user-defined
endpoint, error was: publicURL endpoint for baremetal service not found

Environment
===
This bug is reproducible in devstack with Ironic plugin enabled.


Related bugs:

Ironic: https://storyboard.openstack.org/#!/story/2005118
Nova: https://bugs.launchpad.net/nova/+bug/1707860
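For reference, a sketch of how an internal endpoint is selected with
keystoneauth's standard Session/Adapter API (the credentials and URL below
are placeholders); this is the kind of interface selection the report says is
not being passed through to the client:

from keystoneauth1 import adapter, session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url='http://controller:5000/v3',   # placeholder
                   username='nova', password='secret',
                   project_name='service',
                   user_domain_id='default',
                   project_domain_id='default')
sess = session.Session(auth=auth)

# interface='internal' avoids the default 'public' endpoint lookup.
baremetal = adapter.Adapter(session=sess, service_type='baremetal',
                            interface='internal')
print(baremetal.get_endpoint())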

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818295

Title:
  Only Ironic public endpoint is supported

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently, there are a number of places in Ironic that do endpoint lookups 
from the Keystone service catalog. By default, keystoneauth sets it to 'public' 
if not specified.
  Description
  ===
  We are supposed to be able to select the endpoint type by specifying either the 
'interface' or 'valid_interfaces' option in the [keystone_authtoken] section in 
nova.conf. But that parameter is not being conveyed to ironicclient.

  Consequently, this makes it impossible to use Ironic without having
  to expose the public endpoint in the service catalog. Furthermore, for
  security reasons, our controller nodes (subnet) have no route to the
  public network and therefore will not be able to access the public
  endpoint. This is a rather significant limitation in deploying Ironic.
  Also, we seem to have broken backward compatibility as well, as Ironic
  used to work in Pike without having to configure a public endpoint.

  Steps to reproduce
  ==
  1) enable Ironic in devstack
  2) delete the Ironic public endpoint in Keystone
  3) set 'valid_interfaces = internal' in the [ironic] section in nova.conf and 
restart nova-compute service
  4) try to provision a server and it will fail with errors similar to these in 
nova-compute logs

  2019-02-28 18:00:28.136 48891 ERROR nova.virt.ironic.driver [req-
  4bace607-0ab6-45b5-911b-1df5fbcc0e01 None None] An unknown error has
  occurred when trying to get the list of nodes from the Ironic
  inventory. Error: Must provide Keystone credentials or user-defined
  endpoint, error was: publicURL endpoint for baremetal service not
  found: AmbiguousAuthSystem: Must provide Keystone credentials or user-
  defined endpoint, error was: publicURL endpoint for baremetal service
  not found

  Expected result
  ===
  Server created without error.

  
  Actual result
  =
  Server failed to create, with errors similar to these in nova-compute logs

  2019-02-28 18:00:28.136 48891 ERROR nova.virt.ironic.driver [req-
  4bace607-0ab6-45b

[Yahoo-eng-team] [Bug 1804482] Re: Remove obsolete endpoint policies from policy.v3cloudsample.json

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/619333
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=6c6c6049f5558f866270caa193abd9d6c673e296
Submitter: Zuul
Branch:master

commit 6c6c6049f5558f866270caa193abd9d6c673e296
Author: Lance Bragstad 
Date:   Wed Nov 21 17:48:31 2018 +

Remove endpoint policies from policy.v3cloudsample.json

By incorporating system-scope and default roles, we've effectively
made these policies obsolete. We can simplify what we maintain and
provide a more consistent, unified view of default endpoint behavior
by removing them.

Change-Id: I423e54c359b787efdda70f5d141f21e9103f3524
Closes-Bug: 1804482


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1804482

Title:
  Remove obsolete endpoint policies from policy.v3cloudsample.json

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Once support for scope types landed in the endpoint API policies, the
  policies in policy.v3cloudsample.json became obsolete [0][1].

  We should add formal protection for the policies with enforce_scope =
  True in keystone.tests.unit.protection.v3 and remove the old policies
  from the v3 sample policy file.

  This will reduce confusion by having a true default policy for
  endpoints.

  [0] https://review.openstack.org/#/c/525695/
  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json?id=fb73912d87b61c419a86c0a9415ebdcf1e186927#n25

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1804482/+subscriptions



[Yahoo-eng-team] [Bug 1818131] Re: GET /servers/{server_id} response for a down-cell instance does not include links

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/640302
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a0b1951d2ad2195c26d7f372c54837f6f88fe1f9
Submitter: Zuul
Branch:master

commit a0b1951d2ad2195c26d7f372c54837f6f88fe1f9
Author: Surya Seetharaman 
Date:   Fri Mar 1 10:58:26 2019 +0100

Add "links" in the response of "nova show" for a down-cell instance

The down-cell microversion 2.69 just recently merged and it returns
links in the response for GET /servers/detail and GET /servers but not
for GET /servers/{server_id} which was an oversight because that API
returns links normally.

We should include the links key in the 'nova show' case as well and this
patch does exactly that.

Typically this would require a microversion change but given the code
merged recently and is not yet released we are just fixing this
oversight through this patch without a microversion bump.

Closes-Bug: #1818131
Change-Id: I2ce03df994f59c37b5ce3102c4e7165d17701798


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818131

Title:
  GET /servers/{server_id} response for a down-cell instance does not
  include links

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The down-cell microversion 2.69 just recently merged and returns links
  in the response for GET /servers/detail but not GET
  /servers/{server_id} (show) which was an oversight because that API
  returns links normally:

  https://developer.openstack.org/api-ref/compute/?expanded=show-server-
  details-detail#show-server-details

  We should include the links in the 'show' case as well and make sure
  to update the docs in the compute API guide:

  https://developer.openstack.org/api-guide/compute/down_cells.html

  
https://review.openstack.org/#/c/621474/28/nova/api/openstack/compute/views/servers.py@180

  Typically this would require a microversion change but given the code
  merged on Feb 15 (https://review.openstack.org/#/c/635146/) and is not
  yet released I think we can bend the microversion rules and just fix
  this oversight since I doubt anyone has deployed or is using this yet
  since it's a pretty advanced use case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818131/+subscriptions



[Yahoo-eng-team] [Bug 1818292] [NEW] POLICY_CHECK_FUNCTION as string change incomplete

2019-03-01 Thread David Lyle
Public bug reported:

Change in I8a346e55bb98e4e22e0c14a614c45d493d20feb4 to make
POLICY_CHECK_FUNCTION a string rather than a function was incomplete.

The case in horizon/tables/base.py is problematic in particular and
results in raising the TypeError: 'str' object is not callable error.

There is another instance that is not problematic, but it should be changed
for consistency's sake.
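A minimal sketch of the pattern the string-valued setting requires before it
is called (assuming Django's standard import_string helper and a dotted-path
value such as 'openstack_auth.policy.check'):

from django.conf import settings
from django.utils.module_loading import import_string

# Resolve the dotted path to a callable first; calling the raw string is
# what raises "TypeError: 'str' object is not callable".
policy_check = import_string(settings.POLICY_CHECK_FUNCTION)
# ...policy_check(...) can now be invoked wherever the function was used.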

** Affects: horizon
 Importance: Undecided
 Assignee: David Lyle (david-lyle)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1818292

Title:
  POLICY_CHECK_FUNCTION as string change incomplete

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Change in I8a346e55bb98e4e22e0c14a614c45d493d20feb4 to make
  POLICY_CHECK_FUNCTION a string rather than a function was incomplete.

  The case in horizon/tables/base.py is problematic in particular and
  results in raising the TypeError: 'str' object is not callable error.

  There is another instance that is not problematic, but it should be changed
  for consistency's sake.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1818292/+subscriptions



[Yahoo-eng-team] [Bug 1606741] Re: Metadata service for instances is unavailable when the l3-agent on the compute host is dvr_snat mode

2019-03-01 Thread OpenStack Infra
** Changed in: neutron
   Status: Won't Fix => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Slawek Kaplonski (slaweq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606741

Title:
  Metadata service for instances is unavailable when the l3-agent on the
  compute host  is dvr_snat mode

Status in neutron:
  In Progress

Bug description:
  In my mitaka environment, there are five nodes: the controller,
  network1, network2, computer1 and computer2 nodes. I start the
  l3-agents with dvr_snat mode on all network and compute nodes and set
  enable_metadata_proxy to true in l3-agent.ini. It works well for most
  neutron services except the metadata proxy service. When I run the command
  "curl http://169.254.169.254" in an instance booted from cirros, it
  returns "curl: couldn't connect to host" and the instance can't fetch
  metadata during its first boot.

  * Pre-conditions: start l3-agent with dvr_snat mode in all computer
  and network nodes and set enable_metadata_proxy to true in
  l3-agent.ini.

  * Step-by-step reproduction steps:
  1.create a network and a subnet under this network;
  2.create a router;
  3.add the subnet to the router
  4.create an instance with cirros (or other images) on this subnet
  5.open the console for this instance, run the command 'curl 
http://169.254.169.254' in bash, and wait for the result.

  * Expected output: the command 'curl http://169.254.169.254' should
  return the actual metadata info

  * Actual output:  the command actually returns "curl: couldn't connect
  to host"

  * Version:
    ** Mitaka
    ** All hosts are centos7

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1606741/+subscriptions



[Yahoo-eng-team] [Bug 1818092] Re: hypervisor check in _check_instance_has_no_numa() is broken

2019-03-01 Thread Matt Riedemann
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => In Progress

** Changed in: nova/rocky
   Importance: Undecided => High

** Changed in: nova/rocky
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818092

Title:
  hypervisor check in _check_instance_has_no_numa() is broken

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  In commit ae2e5650d "Fail to live migration if instance has a NUMA
  topology" there is a check against hypervisor_type.

  Unfortunately it tests against the value "obj_fields.HVType.KVM".
  Even when KVM is supported by qemu the libvirt driver will still
  report the hypervisor type as "QEMU". So we need to fix up the
  hypervisor type check otherwise we'll always fail the check.
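  A self-contained sketch of the kind of check the fix needs (illustrative
  helper, not the actual nova code): accept the reported type whether it is
  "QEMU" or "kvm".

def _is_qemu_or_kvm(hypervisor_type):
    # libvirt reports "QEMU" even when KVM acceleration is in use, so a
    # strict comparison against HVType.KVM always fails.
    return hypervisor_type.lower() in ('qemu', 'kvm')


print(_is_qemu_or_kvm('QEMU'))  # True
print(_is_qemu_or_kvm('kvm'))   # True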

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818092/+subscriptions



[Yahoo-eng-team] [Bug 1818183] Re: non-admin user live migration failed due to auth token expire

2019-03-01 Thread Matt Riedemann
You have to configure nova to use service user auth tokens as described
here:

https://docs.openstack.org/nova/latest/admin/support-compute.html#user-
token-times-out-during-long-running-operations

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818183

Title:
  non-admin user live migration failed due to auth token expire

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  When a non-admin live migration takes a long time, the keystone auth token may 
expire, which could cause the post-migration step to fail.

  Steps to reproduce
  ==
  1. Allow non-admin user to perform live migration
  2. Set a breakpoint in _post_live_migration method of compute manager
  3. Set keystone auth token expiration time to a low value
  4. Start a live migration; once the breakpoint is reached, wait for the auth 
token to expire, then continue

  Expected result
  ===
  Live migration finishes successfully.

  Actual result
  =
  Live migration fails because authorization fails in the setup_networks_on_host method
  The instance remains in the migrating status, but nothing more happens

  Environment
  ===
  1. I'm using the rocky branch on 3 nodes, deployed by kolla-ansible
  2. Libvirt + kvm + Ceph + Neutron vxlan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818183/+subscriptions



[Yahoo-eng-team] [Bug 1818252] [NEW] Incorrect logging instance uuid in nova logs

2019-03-01 Thread Michal Arbet
Public bug reported:

Hi,

Found a logging mistake in nova :

1596dd6225ef4abea7762c8b040b3f55 d60b403029ad41888c5822584263b983 - default 
default] [instance: 26bec746-110b-4777-af3f-15143b473667] Migrating instance to 
p6r01-nd02 finished successfully.
2019-03-01 14:40:29.949 2262558 INFO nova.scheduler.client.report 
[req-4439c30b-0f5b-4982-b775-37b2062e849c 1596dd6225ef4abea7762c8b040b3f55 
d60b403029ad41888c5822584263b983 - default default] Deleted allocation for 
instance 4c7621bb-c34b-4e57-82ee-e9cea87d7a8b

There is "Deleted allocation for instance 4c7621bb-c34b-4e57-82ee-
e9cea87d7a8b", which is the wrong uuid (this is the uuid of the migration,
not of the instance).

It should be 26bec746-110b-4777-af3f-15143b473667 (the instance uuid).

Thanks,
Michal

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818252

Title:
  Incorrect logging instance uuid in nova logs

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  Found a logging mistake in nova :

  1596dd6225ef4abea7762c8b040b3f55 d60b403029ad41888c5822584263b983 - default 
default] [instance: 26bec746-110b-4777-af3f-15143b473667] Migrating instance to 
p6r01-nd02 finished successfully.
  2019-03-01 14:40:29.949 2262558 INFO nova.scheduler.client.report 
[req-4439c30b-0f5b-4982-b775-37b2062e849c 1596dd6225ef4abea7762c8b040b3f55 
d60b403029ad41888c5822584263b983 - default default] Deleted allocation for 
instance 4c7621bb-c34b-4e57-82ee-e9cea87d7a8b

  There is "Deleted allocation for instance 4c7621bb-c34b-4e57-82ee-
  e9cea87d7a8b", which is the wrong uuid (this is the uuid of the migration,
  not of the instance).

  It should be 26bec746-110b-4777-af3f-15143b473667 (the instance uuid).

  Thanks,
  Michal

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818252/+subscriptions



[Yahoo-eng-team] [Bug 1784590] Re: neutron-dynamic-routing bgp agent should have options for MP-BGP

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/608302
Committed: 
https://git.openstack.org/cgit/openstack/neutron-dynamic-routing/commit/?id=fc2dae7c93eb8606bcf0c77b6691595f69502c20
Submitter: Zuul
Branch:master

commit fc2dae7c93eb8606bcf0c77b6691595f69502c20
Author: Ryan Tidwell 
Date:   Fri Oct 5 10:45:38 2018 -0500

Enable MP-BGP capabilities in Ryu BGP driver

This change enables all capabilities regardless of peer address
family, thereby enabling announcement of IPv6 prefixes over IPv4
sessions and vice-versa. Peers can opt in/out with capabilities
configured on the remote end of the session.

Change-Id: I7b241cdfddcecbdd8cdde2e88de35b9be6982451
Closes-Bug: #1784590


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784590

Title:
  neutron-dynamic-routing bgp agent should have options for MP-BGP

Status in neutron:
  Fix Released

Bug description:
  neutron-dynamic-routing

  The implementation of BGP with Ryu supports IPv4 and IPv6 peers, but
  the MP-BGP capabilities are announced based on whether the peer is a v4 or
  v6 address.

  If you want to use an IPv4 peer but announce IPv6 prefixes, this will
  not work, because in services/bgp/agent/driver/ryu/driver.py the
  function add_bgp_peer() disables the IPv6 MP-BGP capability if the
  peer IP is an IPv4 address.

  This should be extended to support setting the capabilities manually.
  If you change the enable_ipv6 variable in the add_bgp_peer() function
  to True, it will correctly announce IPv6 prefixes over the IPv4 BGP
  peer if the upstream router (the other side) supports the MP-BGP IPv6
  capability.

  Should be easy to implement with a "mode" config option that can be
  set to auto or manual and then options to override the capabilities.
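  For illustration, the relevant knobs on the driver's BGP speaker (assuming
  Ryu's documented BGPSpeaker.neighbor_add() keyword arguments; the ASNs and
  addresses are placeholders):

from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1')

# Enabling the IPv6 capability on an IPv4 peer is what allows IPv6 prefixes
# to be announced over the IPv4 session, provided the remote side also
# advertises the MP-BGP IPv6 capability.
speaker.neighbor_add('192.0.2.2', remote_as=64513,
                     enable_ipv4=True,
                     enable_ipv6=True)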

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1784590/+subscriptions



[Yahoo-eng-team] [Bug 1818092] Re: hypervisor check in _check_instance_has_no_numa() is broken

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/635350
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4da058723332a58323640fabce3d95902946a07d
Submitter: Zuul
Branch:master

commit 4da058723332a58323640fabce3d95902946a07d
Author: Chris Friesen 
Date:   Wed Feb 6 14:57:12 2019 -0600

fix up numa-topology live migration hypervisor check

It turns out that even when KVM is supported by qemu the libvirt
driver will report the hypervisor type as "QEMU".  So we need to
fix up the hypervisor type check in the live migration code that
checks whether the instance has a NUMA topology.

Closes-Bug: #1818092
Change-Id: I5127227a1e3d76dd413a820b048547ba578aff6b
Signed-off-by: Chris Friesen 


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818092

Title:
  hypervisor check in _check_instance_has_no_numa() is broken

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In commit ae2e5650d "Fail to live migration if instance has a NUMA
  topology" there is a check against hypervisor_type.

  Unfortunately it tests against the value "obj_fields.HVType.KVM".
  Even when KVM is supported by qemu, the libvirt driver will still
  report the hypervisor type as "QEMU", so the hypervisor type check
  needs to be fixed up; otherwise it will always fail.
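
  A minimal sketch of the corrected comparison, assuming the check only
  needs to accept both reported hypervisor types; the constants and the
  helper name below are illustrative and not the actual nova code:

  # Illustrative only; the real check uses nova.objects.fields.HVType
  # values inside the live-migration code path.
  QEMU = 'qemu'
  KVM = 'kvm'

  def numa_live_migration_check_applies(hypervisor_type):
      # libvirt reports "QEMU" even when KVM acceleration is in use, so
      # the check must accept either value instead of comparing with
      # KVM alone.
      return hypervisor_type.lower() in (QEMU, KVM)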

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818233] [NEW] ping_ip_address used without assertion in Neutron Tempest Plugin

2019-03-01 Thread Assaf Muller
Public bug reported:

The Neutron Tempest Plugin has a ping_ip_address helper in:

neutron_tempest_plugin/scenario/base.py

It's used in several places:

git grep -n ping_ip_address | cut -d":" -f1-2
neutron_tempest_plugin/scenario/base.py:313
neutron_tempest_plugin/scenario/test_basic.py:39
neutron_tempest_plugin/scenario/test_security_groups.py:118
neutron_tempest_plugin/scenario/test_security_groups.py:146
neutron_tempest_plugin/scenario/test_security_groups.py:152
neutron_tempest_plugin/scenario/test_security_groups.py:178
neutron_tempest_plugin/scenario/test_security_groups.py:193
neutron_tempest_plugin/scenario/test_security_groups.py:208
neutron_tempest_plugin/scenario/test_security_groups.py:261
neutron_tempest_plugin/scenario/test_security_groups.py:292

In all of these places it's used without an assertion, meaning that if
the ping fails, it will time out (CONF.validation.ping_timeout) and the
test then continues as if nothing happened. The test will not
necessarily fail.

** Affects: neutron
 Importance: Low
 Status: New


** Tags: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818233

Title:
  ping_ip_address used without assertion in Neutron Tempest Plugin

Status in neutron:
  New

Bug description:
  The Neutron Tempest Plugin has a ping_ip_address helper in:

  neutron_tempest_plugin/scenario/base.py

  It's used in several places:

  git grep -n ping_ip_address | cut -d":" -f1-2
  neutron_tempest_plugin/scenario/base.py:313
  neutron_tempest_plugin/scenario/test_basic.py:39
  neutron_tempest_plugin/scenario/test_security_groups.py:118
  neutron_tempest_plugin/scenario/test_security_groups.py:146
  neutron_tempest_plugin/scenario/test_security_groups.py:152
  neutron_tempest_plugin/scenario/test_security_groups.py:178
  neutron_tempest_plugin/scenario/test_security_groups.py:193
  neutron_tempest_plugin/scenario/test_security_groups.py:208
  neutron_tempest_plugin/scenario/test_security_groups.py:261
  neutron_tempest_plugin/scenario/test_security_groups.py:292

  In all of these places it's used without an assertion, meaning that
  if the ping fails, it will time out (CONF.validation.ping_timeout)
  and the test then continues as if nothing happened. The test will not
  necessarily fail.
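
  A hedged sketch of how callers could assert on the helper's result
  instead of ignoring it; the wrapper name and the should_succeed
  keyword are assumptions about the helper's interface, not verified
  against the plugin:

  # Sketch only: wrap the helper so that a failed ping fails the test
  # explicitly instead of merely consuming the ping timeout.
  def assert_ping(self, ip_address, should_succeed=True):
      reached = self.ping_ip_address(ip_address,
                                     should_succeed=should_succeed)
      self.assertTrue(
          reached,
          'Ping to %s did not behave as expected (should_succeed=%s)'
          % (ip_address, should_succeed))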

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815797] Re: "rpc_response_max_timeout" configuration variable not present in OVS agent

2019-03-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/636719
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=84f8ae9d1f696b2561b634336338df0e14c37116
Submitter: Zuul
Branch: master

commit 84f8ae9d1f696b2561b634336338df0e14c37116
Author: Rodolfo Alonso Hernandez 
Date:   Wed Feb 13 19:01:06 2019 +

Add "rpc_response_max_timeout" config variable in OVS agent

The configuration variable "rpc_response_max_timeout" is not defined
in the OVS agent configuration, so when the agent is stopped (SIGTERM)
an exception is raised. This error can be seen in the fullstack tests.

Change-Id: Ieedb6e87a4e98efef0f895566f7d4d88c3cd9336
Closes-Bug: #1815797


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815797

Title:
  "rpc_response_max_timeout" configuration variable not present in OVS
  agent

Status in neutron:
  Fix Released

Bug description:
  The configuration variable "rpc_response_max_timeout" is not defined
  in the OVS agent, so when the agent is stopped (SIGTERM) an exception
  is raised. This error can be seen in the fullstack tests.

  Error log: http://logs.openstack.org/52/636652/1/check/neutron-fullstack/91b459a/logs/dsvm-fullstack-logs/TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart_OVS,VLANs,openflow-cli_/neutron-openvswitch-agent--2019-02-13--15-54-07-617752.txt.gz
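
  A minimal sketch, assuming oslo.config, of registering the option so
  that reading it during shutdown does not fail; the default value and
  function name are illustrative rather than the exact change that
  merged:

  # Illustrative registration; the merged fix wires the shared option
  # into the OVS agent's configuration instead of defining it locally.
  from oslo_config import cfg

  _rpc_opts = [
      cfg.IntOpt('rpc_response_max_timeout',
                 default=600,
                 help='Maximum seconds to wait for a response from an '
                      'RPC call.'),
  ]

  def register_rpc_opts(conf=cfg.CONF):
      # Without registration, reading conf.rpc_response_max_timeout
      # raises NoSuchOptError when the agent handles SIGTERM.
      conf.register_opts(_rpc_opts)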

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818224] [NEW] IPv6 forwarding disabled on L3HA routers without gateway

2019-03-01 Thread Slawek Kaplonski
Public bug reported:

When an L3HA router is created, it is first transitioned to backup on
all nodes. In that case IPv6 forwarding is disabled in the router's
namespace. This behaviour was introduced by commit
https://github.com/openstack/neutron/commit/676a3ebe2f5b62f0ce7a3f7f434526931d5504a5

Later the router is transitioned to master on one of the nodes, and
then IPv6 forwarding should be enabled. Unfortunately it is enabled
only when a gateway is configured:
https://github.com/openstack/neutron/blob/929d3fe9f49aeea817c8a922d0b28d605cc9b562/neutron/agent/l3/router_info.py#L704

So if there is no gateway connected to the router and it has only
tenant network ports, IPv6 forwarding will not be turned on and IPv6
connectivity between its subnets will not be possible.

** Affects: neutron
 Importance: High
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818224

Title:
  IPv6 forwarding disabled on L3HA routers without gateway

Status in neutron:
  New

Bug description:
  When an L3HA router is created, it is first transitioned to backup on
  all nodes. In that case IPv6 forwarding is disabled in the router's
  namespace. This behaviour was introduced by commit
  https://github.com/openstack/neutron/commit/676a3ebe2f5b62f0ce7a3f7f434526931d5504a5

  Later the router is transitioned to master on one of the nodes, and
  then IPv6 forwarding should be enabled. Unfortunately it is enabled
  only when a gateway is configured:
  https://github.com/openstack/neutron/blob/929d3fe9f49aeea817c8a922d0b28d605cc9b562/neutron/agent/l3/router_info.py#L704

  So if there is no gateway connected to the router and it has only
  tenant network ports, IPv6 forwarding will not be turned on and IPv6
  connectivity between its subnets will not be possible.
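
  A hedged sketch of the likely shape of a fix, turning forwarding on
  whenever the router transitions to master rather than only on the
  gateway path; the helpers and the namespace-execution callable are
  assumptions, not the actual neutron code:

  # Sketch only; in neutron this would go through ip_lib and privsep
  # rather than a bare callable executed inside the qrouter namespace.
  def enable_ipv6_forwarding(run_in_namespace, device='all'):
      # Equivalent to: sysctl -w net.ipv6.conf.all.forwarding=1
      run_in_namespace(
          ['sysctl', '-w', 'net.ipv6.conf.%s.forwarding=1' % device])

  def on_ha_state_master(run_in_namespace):
      # Enable forwarding unconditionally, not only when a gateway port
      # is configured (the router_info.py path referenced above).
      enable_ipv6_forwarding(run_in_namespace)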

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1817623] Re: Create a domain, projects, users, and roles in keystone

2019-03-01 Thread Colleen Murphy
Okay, closing then. Thanks!

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1817623

Title:
  Create a domain, projects, users, and roles in keystone

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  - [x] This doc is inaccurate in this way: __
  [root@esc ~]# openstack domain create --description "An Example Domain" example
  openstack: 'domain create --description An Example Domain example' is not an openstack command. See 'openstack --help'.

  
  ---
  Release: 13.0.3.dev1 on 2018-11-21 21:10
  SHA: 34185638dbf5f4421a44e44c7c245517eb79c938
  Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-users-rdo.rst
  URL: https://docs.openstack.org/keystone/queens/install/keystone-users-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1817623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818213] [NEW] Horizon pop-up "Create image" does not have confirmation alert

2019-03-01 Thread Vadym Markov
Public bug reported:

Steps to reproduce:
1. Deploy an environment;
2. Open the Horizon dashboard;
3. Start uploading a huge image to Glance via Horizon;
4. Try to close the "Create image" pop-up.

Expected result:
A confirmation alert appears.

Actual result:
The pop-up closes without any alert.

In any case, the user can close the window without confirmation and
lose all the data in the input fields.

** Affects: horizon
 Importance: Undecided
 Assignee: Vadym Markov (vmarkov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1818213

Title:
  Horizon pop-up "Create image" does not have confirmation alert

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Steps to reproduce:
  1. Deploy an environment;
  2. Open the Horizon dashboard;
  3. Start uploading a huge image to Glance via Horizon;
  4. Try to close the "Create image" pop-up.

  Expected result:
  A confirmation alert appears.

  Actual result:
  The pop-up closes without any alert.

  In any case, the user can close the window without confirmation and
  lose all the data in the input fields.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1818213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813459] Re: Mounting the cinder storage dashboard shows that the mount was successful but it turns out that the local store is the one mounted

2019-03-01 Thread Akihiro Motoki
Per comment #32, it turns out Horizon is not involved. I am removing
Horizon from the affected projects to avoid uninteresting bug mail from
the Horizon developers' perspective.

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1813459

Title:
  Mounting the cinder storage dashboard shows that the mount was
  successful but it turns out that the local store is the one mounted

Status in Zun:
  New

Bug description:
  1: Dashboard: Error: Unable to retrieve attachment information.

  2: My configuration:
  vim /etc/zun/zun.conf
  [volume]
  driver = cinder
  volume_dir = /var/lib/zun/mnt

  3: Once the container is up, a directory is generated under
  /var/lib/zun/mnt/, e.g. /var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7

  4: After deleting /var/lib/zun/mnt/8bb97e08-dd28-43e8-a9df-cbad4cf924d7,
  the container will be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/zun/+bug/1813459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp