[Yahoo-eng-team] [Bug 1833156] [NEW] neutron fwaas v2 log function does not work

2019-06-17 Thread zhanghao
Public bug reported:

openstack version:rocky
operating system:centos7
libnetfilter_log-1.0.1-7.el7.x86_64

neutron.conf
[DEFAULT]
service_plugins = router,firewall_v2,log
[service_providers]
service_provider = FIREWALL_V2:fwaas_db:neutron_fwaas.services.firewall.service_drivers.agents.agents.FirewallAgentDriver:default

fwaas_driver.ini 
[fwaas]
agent_version = v2
driver = neutron_fwaas.services.firewall.service_drivers.agents.drivers.linux.iptables_fwaas_v2.IptablesFwaasDriver
enabled = True

l3_agent.ini
[agent]
extensions = fwaas_v2,fwaas_v2_log

Topology
vm1 172.16.10.14
vm2 172.16.20.12
r1 172.16.10.1
   172.16.20.1

#openstack firewall group rule show deny_ping
+------------------------+-------------------------------------------+
| Field                  | Value                                     |
+------------------------+-------------------------------------------+
| Action                 | deny                                      |
| Description            |                                           |
| Destination IP Address | 172.16.20.12                              |
| Destination Port       | None                                      |
| Enabled                | True                                      |
| ID                     | a3231ec7-f0a0-48cd-b063-2bf0348ee0c5      |
| IP Version             | 4                                         |
| Name                   | deny_ping                                 |
| Project                | f8c73e555a294972964781606efb5291          |
| Protocol               | icmp                                      |
| Shared                 | False                                     |
| Source IP Address      | 172.16.10.14                              |
| Source Port            | None                                      |
| firewall_policy_id     | [u'cd9b4031-7d8c-4721-99aa-dedac7e1317f'] |
| project_id             | f8c73e555a294972964781606efb5291          |
+------------------------+-------------------------------------------+

#openstack network log show my-log
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| Description     |                                      |
| Enabled         | True                                 |
| Event           | ALL                                  |
| ID              | 009cdc65-360d-46c1-9366-360c8b094351 |
| Name            | my-log                               |
| Project         | f8c73e555a294972964781606efb5291     |
| Resource        | 087a286e-bb7b-4583-bac4-0a7828c88e91 |
| Target          | None                                 |
| Type            | firewall_group                       |
| created_at      | 2019-06-13T07:46:13Z                 |
| revision_number | 0                                    |
| tenant_id       | f8c73e555a294972964781606efb5291     |
| updated_at      | 2019-06-13T07:46:13Z                 |
+-----------------+--------------------------------------+

#ip netns exec qrouter-38b02e81-bb69-48aa-9ca1-23b371af0b7f iptables -nvL
Chain neutron-l3-agent-dropped (5 references)
 pkts bytes target prot opt in              out             source     destination
   40  3360 NFLOG  all  --  qr-5feaec8e-8b  *               0.0.0.0/0  0.0.0.0/0    limit: avg 100/sec burst 25 nflog-prefix  12876978778924028228
    0     0 NFLOG  all  --  *               qr-5feaec8e-8b  0.0.0.0/0  0.0.0.0/0    limit: avg 100/sec burst 25 nflog-prefix  12876978778924028228
   40  3360 DROP   all  --  *               *               0.0.0.0/0  0.0.0.0/0

--
NFLOG has captured the packets, but no records appear in the log file.
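One diagnostic that can help narrow this down is to dump the NFLOG rules and their nflog-prefix values from inside the router namespace and compare the prefix with the neutron log resource. The sketch below is only an illustration (it assumes root access and uses the router UUID quoted above); it is not part of neutron:

    #!/usr/bin/env python
    # Hedged diagnostic sketch: list the NFLOG rules (and their prefixes) in the
    # qrouter namespace from this report. Requires root; the namespace name is
    # taken from the iptables output above.
    import subprocess

    ROUTER_NS = 'qrouter-38b02e81-bb69-48aa-9ca1-23b371af0b7f'

    rules = subprocess.check_output(
        ['ip', 'netns', 'exec', ROUTER_NS, 'iptables', '-S'],
        universal_newlines=True)

    for line in rules.splitlines():
        if 'NFLOG' in line:
            print(line)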

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833156

Title:
  neutron fwaas v2 log function does not work

Status in neutron:
  New

Bug description:
  openstack version:rocky
  operating system:centos7
  libnetfilter_log-1.0.1-7.el7.x86_64

  neutron.conf
  [DEFAULT]
  service_plugins = router,firewall_v2,log
  [service_providers]
  service_provider = FIREWALL_V2:fwaas_db:neutron_fwaas.services.firewall.service_drivers.agents.agents.FirewallAgentDriver:default

  fwaas_driver.ini 
  [fwaas]
  agent_version = v2
  driver = neutron_fwaas.services.firewall.service_drivers.agents.drivers.linux.iptables_fwaas_v2.IptablesFwaasDriver
  enabled = True

  l3_agent.ini
  [agent]
  extensions = fwaas_v2,fwaas_v2_log

  Topology
  vm1 172.16.10.14
  vm2 172.16.20.12
  r1 172.16.10.1
 172.16.20.1

  #openstack firewall group rule show deny_ping
  +------------------------+-------------------------------------------+
  | Field                  | Value                                     |
  

[Yahoo-eng-team] [Bug 1833130] [NEW] Quota legacy method warning logged even without CONF.quota.count_usage_from_placement=True

2019-06-17 Thread melanie witt
Public bug reported:

The following warning is being logged even when
CONF.quota.count_usage_from_placement is not set to True:

Jun 17 18:10:23.399734 ubuntu-bionic-rax-dfw-0007867224
devstack@n-api.service[9186]: WARNING nova.quota [None req-3ba6d21b-40e4
-47bb-8d38-ae2647245cbd None None] Falling back to legacy quota counting
method for instances, cores, and ram

This is not correct and results in constant warnings being logged for
users who have not opted-in to counting quota usage from placement.

Example from a gate run:

http://logs.openstack.org/67/652967/6/check/tempest-full-
py3/02860e1/controller/logs/screen-n-api.txt.gz#_Jun_17_18_10_23_399734

The message is logged 186 times in the file ^ when
CONF.quota.count_usage_from_placement = False.
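For illustration only (this is not the actual nova code), the intended behaviour is roughly the guard sketched below: the fallback warning should only fire when the operator has opted in to placement-based counting.

    import logging

    LOG = logging.getLogger(__name__)

    def maybe_warn_legacy_fallback(count_usage_from_placement):
        # Mimics the intended guard: only warn for deployments that opted in
        # to counting quota usage from placement and had to fall back.
        if count_usage_from_placement:
            LOG.warning('Falling back to legacy quota counting method for '
                        'instances, cores, and ram')

    maybe_warn_legacy_fallback(False)  # count_usage_from_placement=False -> no warning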

** Affects: nova
 Importance: Low
 Assignee: melanie witt (melwitt)
 Status: Triaged


** Tags: quotas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1833130

Title:
  Quota legacy method warning logged even without
  CONF.quota.count_usage_from_placement=True

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  The following warning is being logged even when
  CONF.quota.count_usage_from_placement is not set to True:

  Jun 17 18:10:23.399734 ubuntu-bionic-rax-dfw-0007867224
  devstack@n-api.service[9186]: WARNING nova.quota [None req-3ba6d21b-
  40e4-47bb-8d38-ae2647245cbd None None] Falling back to legacy quota
  counting method for instances, cores, and ram

  This is not correct and results in constant warnings being logged for
  users who have not opted-in to counting quota usage from placement.

  Example from a gate run:

  http://logs.openstack.org/67/652967/6/check/tempest-full-
  py3/02860e1/controller/logs/screen-n-api.txt.gz#_Jun_17_18_10_23_399734

  The message is logged 186 times in the file ^ when
  CONF.quota.count_usage_from_placement = False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1833130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833125] [NEW] Remaining neutron-lbaas relevant code and documentation

2019-06-17 Thread Bernard Cafarelli
Public bug reported:

neutron-lbaas was deprecated for some time and is now completely retired
in Train cycle [0]

From a quick grep in the neutron repository, we still have references to it
as of June 17 (a rough sketch of such a scan is included after the links below).

Some examples:
* Admin guide page [1] on configuration and usage
* LBaaS related policies in neutron/conf/policies/agent.py
* L3 DVR checking device_owner names DEVICE_OWNER_LOADBALANCER and 
DEVICE_OWNER_LOADBALANCERV2
* Relevant unit tests (mostly related to previous feature)

We should drop all of these from neutron repository

[0] http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006142.html
[1] https://docs.openstack.org/neutron/latest/admin/config-lbaas.html
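
For reference, a rough sketch of the kind of grep mentioned above (assuming a local neutron checkout; the patterns are illustrative, not exhaustive):

    import os
    import re

    # Illustrative patterns only; a real cleanup pass would likely need more.
    PATTERN = re.compile(r'DEVICE_OWNER_LOADBALANCER(V2)?|lbaas', re.IGNORECASE)

    for root, _dirs, files in os.walk('neutron'):
        for name in files:
            if not name.endswith(('.py', '.rst')):
                continue
            path = os.path.join(root, name)
            with open(path, errors='ignore') as fh:
                for lineno, line in enumerate(fh, 1):
                    if PATTERN.search(line):
                        print('%s:%d: %s' % (path, lineno, line.strip()))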

** Affects: neutron
 Importance: Low
 Status: New


** Tags: doc lbaas low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833125

Title:
  Remaining neutron-lbaas relevant code and documentation

Status in neutron:
  New

Bug description:
  neutron-lbaas was deprecated for some time and is now completely
  retired in Train cycle [0]

  From a quick grep in neutron repository, we still have references to
  it as of June 17.

  Some examples:
  * Admin guide page [1] on configuration and usage
  * LBaaS related policies in neutron/conf/policies/agent.py
  * L3 DVR checking device_owner names DEVICE_OWNER_LOADBALANCER and 
DEVICE_OWNER_LOADBALANCERV2
  * Relevant unit tests (mostly related to previous feature)

  We should drop all of these from neutron repository

  [0] 
http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006142.html
  [1] https://docs.openstack.org/neutron/latest/admin/config-lbaas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1833125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833122] [NEW] FWaaS admin documentation is outdated

2019-06-17 Thread Bernard Cafarelli
Public bug reported:

https://docs.openstack.org/neutron/latest/admin/fwaas.html

This page has some issues:
* still references FWaaS v1
* mentions upcoming features in Ocata (did we implement it in the end?)
* may not be up-to-date for v2 API (features implemented in the meantime)

** Affects: neutron
 Importance: Low
 Status: New


** Tags: doc fwaas low-hanging-fruit

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833122

Title:
  FWaaS admin documentation is outdated

Status in neutron:
  New

Bug description:
  https://docs.openstack.org/neutron/latest/admin/fwaas.html

  This page has some issues:
  * still references FWaaS v1
  * mentions upcoming features in Ocata (did we implement it in the end?)
  * may not be up-to-date for v2 API (features implemented in the meantime)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1833122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833120] [NEW] Compute schedulers in nova - missing ComputeFilter in enabled_filters default value

2019-06-17 Thread Matt Riedemann
Public bug reported:

- [x] This doc is inaccurate in this way:

The enabled_filters default value is listed in a few places, and in two
of them ComputeFilter is missing even though it is enabled by default:

https://docs.openstack.org/nova/latest/configuration/config.html#filter_scheduler.enabled_filters

One is in this section:

https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#filter-scheduler

and one here:

https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#configure-scheduler-to-support-host-aggregates

It would be better to just reference the configuration option, which has
the default value for any given release, than to try to mirror that default
list in the docs like this, since it's obviously error-prone.
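
A hedged way to double-check the effective default without copying the list by hand (assuming a nova checkout or install on PYTHONPATH; the option lives in the [filter_scheduler] group):

    import nova.conf

    CONF = nova.conf.CONF
    # Prints the in-code default for enabled_filters (ComputeFilter included),
    # i.e. what the config reference linked above is generated from.
    print(CONF.filter_scheduler.enabled_filters)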

---
Release:  on 2019-06-15 00:15:10
SHA: ea7293c7bed3e5c759523f8d6c69387a4bcd7b9f
Source: 
https://opendev.org/openstack/nova/src/doc/source/admin/configuration/schedulers.rst
URL: https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Confirmed


** Tags: doc

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1833120

Title:
  Compute schedulers in nova - missing ComputeFilter in enabled_filters
  default value

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  - [x] This doc is inaccurate in this way:

  The enabled_filters default value is listed in a few places and in two
  of them the ComputeFilter is missing but is enabled by default:

  
https://docs.openstack.org/nova/latest/configuration/config.html#filter_scheduler.enabled_filters

  One is in this section:

  https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html
  #filter-scheduler

  and one here:

  https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html
  #configure-scheduler-to-support-host-aggregates

  It would be better to just reference the configuration option which
  has the default value for any given release than try to mirror that
  default list in the docs like this since it's obviously error-prone.

  ---
  Release:  on 2019-06-15 00:15:10
  SHA: ea7293c7bed3e5c759523f8d6c69387a4bcd7b9f
  Source: 
https://opendev.org/openstack/nova/src/doc/source/admin/configuration/schedulers.rst
  URL: 
https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1833120/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1824858] Re: nova instance remnant left behind after cold migration completes

2019-06-17 Thread Wendy Mitchell
The test fails at the check that the source host no longer has instance
files after the cold migration.

Hosts have the following labels assigned, i.e. including the remote-storage label:
openstack-compute-node=enabled
openvswitch=enabled 
sriov=enabled
remote-storage=enabled

Also retested with a recent load on the 2+3 lab. These tests continue to fail
for the same reason:

BUILD_TYPE="Formal"
BUILD_ID="20190613T013000Z"

test_cold_migrate_vm[remote-0-0-None-2-volume-confirm] 
test_cold_migrate_vm[remote-1-0-None-1-volume-confirm]
test_cold_migrate_vm[remote-1-512-None-1-image-confirm] 
test_cold_migrate_vm[remote-0-0-None-2-image_with_vol-confirm]

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1824858

Title:
  nova instance remnant left behind after cold migration completes

Status in OpenStack Compute (nova):
  Confirmed
Status in StarlingX:
  Incomplete

Bug description:
  Brief Description
  -
  After cold migration to a new worker node, instance remnants are left behind

  
  Severity
  
  standard

  
  Steps to Reproduce
  --
  worker nodes compute-1 and compute-2 have the remote-storage label enabled
  1. Launch instance on compute-1
  2. cold migrate to compute-2
  3. confirm cold migration to complete

  
  Expected Behavior
  --
  Migration to compute-2 and cleanup of files on compute-1

  
  Actual Behavior
  
  At 16:35:24 cold migration for instance a416ead6-a17f-4bb9-9a96-3134b426b069  
completed to compute-2 but the following path is left behind on compute-1
  compute-1:/var/lib/nova/instances/a416ead6-a17f-4bb9-9a96-3134b426b069

  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069 _base  locks
  a416ead6-a17f-4bb9-9a96-3134b426b069_resize  compute_nodes  lost+found

  
  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069  _base  compute_nodes  locks  lost+found

  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069  _base  compute_nodes  locks  lost+found
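
  A minimal check sketch for the source host, assuming the default instances_path and the instance UUID from this report; after the migration is confirmed, this should print an empty list:

    import os

    INSTANCES_PATH = '/var/lib/nova/instances'   # default path; adjust if overridden
    UUID = 'a416ead6-a17f-4bb9-9a96-3134b426b069'

    leftovers = [name for name in os.listdir(INSTANCES_PATH)
                 if name.startswith(UUID)]
    print('leftover entries: %s' % leftovers)    # expected after confirm: []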


  2019-04-15T16:35:24.646749  clear  700.010  Instance tenant2-migration_test-1 owned by tenant2 has been cold-migrated to host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:24.482575  log  700.168  Cold-Migrate-Confirm complete for instance tenant2-migration_test-1 enabled on host compute-2  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:16.815223  log  700.163  Cold-Migrate-Confirm issued by tenant2 against instance tenant2-migration_test-1 owned by tenant2 on host compute-2  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:10.030068  clear  700.009  Instance tenant2-migration_test-1 owned by tenant2 is cold migrating from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:09.971414  set  700.010  Instance tenant2-migration_test-1 owned by tenant2 has been cold-migrated to host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:09.970212  log  700.162  Cold-Migrate complete for instance tenant2-migration_test-1 now enabled on host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.637687  set  700.009  Instance tenant2-migration_test-1 owned by tenant2 is cold migrating from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.637636  log  700.158  Cold-Migrate inprogress for instance tenant2-migration_test-1 from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.478442  log  700.157  Cold-Migrate issued by tenant2 against instance tenant2-migration_test-1 owned by tenant2 from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:20.181155  log  700.101  Instance tenant2-migration_test-1 is enabled on host compute-1

[Yahoo-eng-team] [Bug 1832968] Re: neutron tempest test fails 100% due to abandoned ubuntu image release

2019-06-17 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/665530
Committed: 
https://git.openstack.org/cgit/openstack/neutron-tempest-plugin/commit/?id=1c95d624ae52df415f2de807959c80117aea0ea8
Submitter: Zuul
Branch:master

commit 1c95d624ae52df415f2de807959c80117aea0ea8
Author: LIU Yulong 
Date:   Sun Jun 16 10:36:56 2019 +0800

Fix: test fails due to image not found

No image in the following link anymore:
http://cloud-images.ubuntu.com/releases/16.04/release-20180622/
Replace with release name based image location.

Closes-Bug: #1832968
Change-Id: I92a0e1643b6d58c4d882ecd03bbc4df855f57301


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832968

Title:
  neutron tempest test fails 100% due to abandoned ubuntu image release

Status in neutron:
  Fix Released

Bug description:
  Failure log:
  
http://logs.openstack.org/17/665517/1/check/neutron-tempest-plugin-dvr-multinode-scenario/95359a1/controller/logs/devstacklog.txt.gz

  neutron tempest image link:
  http://cloud-images.ubuntu.com/releases/16.04/release-20180622/

  Code:
  
https://github.com/openstack/neutron-tempest-plugin/blob/master/.zuul.yaml#L395
  
https://github.com/openstack/neutron-tempest-plugin/blob/master/.zuul.yaml#L510

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1832968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826519] Re: Ephemeral disk volume was not mounted after resizing from non-ephemeral flavor

2019-06-17 Thread Artom Lifshitz
I don't think this is Nova's responsibility - as long as the new block
device shows up with `lsblk`, it's up to the guest OS/tooling to mount
it (or not). I've changed the component to cloud-init in case there's
something actionable for them in this bug. Thanks!

** Project changed: nova => cloud-init

** Tags removed: libvirt resize

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1826519

Title:
  Ephemeral disk volume was not mounted after resizing from non-
  ephemeral flavor

Status in cloud-init:
  New

Bug description:
  Description
  ===
  After resizing from an m1 flavor (no ephemeral disk) to a d1 flavor (with an
ephemeral disk), the ephemeral disk is not mounted on the VM.

  After digging into the related code, I realized there is no action to re-run
cloud-init's mount module.
  By default, cloud-init does not re-run a module that has already run. Since the
mount module takes no action, cloud-init does not write the ephemeral disk entry to
/etc/fstab.

  Steps to reproduce
  ==
  1. Create a VM with the m1.small flavor, which does not have an ephemeral disk
  2. Resize the VM to the d1.small flavor, which has an ephemeral disk

  Expected result
  ===
  1. Ephemeral disk volume (/dev/sdb) was mounted at /mnt

  Actual result
  =
  1. /dev/sdb is visible via 'lsblk', but it is not mounted
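
  A guest-side workaround sketch, assuming the ephemeral device really shows up as /dev/sdb as described above (this is not a cloud-init fix, just a manual check-and-mount):

    import os
    import subprocess

    DEV, MNT = '/dev/sdb', '/mnt'

    if os.path.exists(DEV):
        # Check whether the device is already mounted before mounting it.
        with open('/proc/mounts') as fh:
            mounted = any(line.split()[0] == DEV for line in fh)
        if not mounted:
            subprocess.check_call(['mount', DEV, MNT])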

  Environment
  ===
  Stable/ocata
  Libvirt driver

  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1826519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640427] Re: Error with the content of the Horizon dashboard tutorial at http://docs.openstack.org/developer/horizon/topics/table_actions.html#adding-the-url

2019-06-17 Thread Vishal Manchanda
This is already fixed in the master branch. For more information, please
refer to [1], so I am marking this bug as invalid.

[1]
https://github.com/openstack/horizon/blob/master/doc/source/contributor/tutorials/dashboard.rst#urls

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1640427

Title:
  Error with the content of the Horizon dashboard tutorial at
  http://docs.openstack.org/developer/horizon/topics/table_actions.html
  #adding-the-url

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  This tutorial specifies the following changes to be made to urls.py
  file:

  from django.conf.urls import url

  from openstack_dashboard.dashboards.mydashboard.mypanel import views

  urlpatterns = [,
  url(r'^$',
  views.IndexView.as_view(), name='index'),
  url(r'^(?P<instance_id>[^/]+)/create_snapshot/$',
  views.CreateSnapshotView.as_view(),
  name='create_snapshot'),
  ]

  After this change has been made, and after I restart the httpd I get
  an error in the horizon_error.log as posted below:

  __import__(name)
  [:error] [pid 11724] [remote 172.22.225.42:0]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/mydashboard/mypanel/urls.py",
 line 6
  [:error] [pid 11724] [remote 172.22.225.42:0] urlpatterns = [,
  [:error] [pid 11724] [remote 172.22.225.42:0]^
  [:error] [pid 11724] [remote 172.22.225.42:0] SyntaxError: invalid syntax

  To fix this issue, we need to remove the comma next to the opening square 
bracket in urlpatterns = [,
  The resulting code should look like:

  urlpatterns = [
  url(r'^$',
  views.IndexView.as_view(), name='index'),
  url(r'^(?P<instance_id>[^/]+)/create_snapshot/$',
  views.CreateSnapshotView.as_view(),
  name='create_snapshot'),
  ]

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1640427/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833085] [NEW] Zero-downtime upgrades lead to spurious token validation failures when caching is enabled

2019-06-17 Thread Sebastian Riese
Public bug reported:

When doing the zero-downtime upgrade routine of keystone and having
caching enabled we observe validation failures of valid tokens.

The problem is that if both running versions share a cache, both may
cache validated tokens, but the type of the tokens changed between the
two versions (and the cache pickles the objects-to-cache directly). In
queens it is a dict, in rocky it is a dedicated type `TokenModel`. This
causes exceptions when the tokens are loaded from the cache and don't
have the expected attributes.

The offending code is

vs.

the `@MEMOIZE_TOKEN` decorator serializes the tokens into the cache; both
versions use the same keyspace, but the type of the objects has changed.
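
A toy illustration of the failure mode (not keystone code): one version caches a plain dict, the other expects an object with attributes, so attribute access on the unpickled value fails.

    import pickle

    # "queens"-style writer caches a plain dict
    cached = pickle.dumps({'user_id': 'abc', 'expires_at': '2019-06-17T00:00:00Z'})

    # "rocky"-style reader expects a TokenModel-like object with attributes
    token = pickle.loads(cached)
    try:
        print(token.user_id)
    except AttributeError as exc:
        print('token validation fails: %s' % exc)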

Disabling the caching (by setting `[caching] enabled = false` in the
config) or disabling all but one keystone instances fixes the problem
(of course disabling all but one keystone instance defeats the whole
purpose of a zero-downtime upgrade – this was just done to validate the
cause of the issue).

This issue and the possible workaround (disabling the cache) should at
least be documented. If it is safe to run the instances with separate
caches (per instance or per version), this may be a workaround with less of
a performance impact, but I am not sure whether this would be safe with
respect to token invalidation. My understanding is, that on token
revocation the keystone instance handling the API request invalidates
the cache entry and adds the revocation event to the database. So if the
token was already stored as validated in the other cache, this would
cause the token to be accepted as valid by some of the keystone services
(which use the other cache which says it is valid). So with a load
balancer in front of the keystones the revoked token would sometimes
validate.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1833085

Title:
  Zero-downtime upgrades lead to spurious token validation failures when
  caching is enabled

Status in OpenStack Identity (keystone):
  New

Bug description:
  When doing the zero-downtime upgrade routine of keystone and having
  caching enabled we observe validation failures of valid tokens.

  The problem is, that if both running versions share a cache, both may
  cache validated tokens, but the type of the tokens changed between the
  two versions (and the cache pickles the objects-to-cache directly). In
  queens it is a dict, in rocky it is a dedicated type `TokenModel`.
  This causes exceptions when the tokens are loaded from the cache and
  don't have the expected attributes.

  The offending code is
  

  vs.
  

  the `@MEMOIZE_TOKEN` decorator serializes the tokens into the cache, both 
versions use the same keyspace, but the type of the objects has changed.

  Disabling the caching (by setting `[caching] enabled = false` in the
  config) or disabling all but one keystone instances fixes the problem
  (of course disabling all but one keystone instance defeats the whole
  purpose of a zero-downtime upgrade – this was just done to validate
  the cause of the issue).

  This issue and the possible workaround (disabling the cache) should at
  least be documented. If it is safe to run the instances with separate
  caches (per instance or per version) this may be workaround with less
  of a performance impact, but I am not sure, whether this would be safe
  with respect to token invalidation. My understanding is, that on token
  revocation the keystone instance handling the API request invalidates
  the cache entry and adds the revocation event to the database. So if
  the token was already stored as validated in the other cache, this
  would cause the token to be accepted as valid by some of the keystone
  services (which use the other cache which says it is valid). So with a
  load balancer in front of the keystones the revoked token would
  sometimes validate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1833085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-06-17 Thread Corey Bryant
This is fixed in last week's eoan/train snapshot.

** Changed in: neutron (Ubuntu Eoan)
   Status: Triaged => Fix Released

** Changed in: cloud-archive/train
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in Ubuntu Cloud Archive stein series:
  Triaged
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  Triaged
Status in neutron source package in Cosmic:
  Triaged
Status in neutron source package in Disco:
  Triaged
Status in neutron source package in Eoan:
  Fix Released

Bug description:
  [Impact]
  A prior addition of code to add checksum rules was found to cause problems with
newer kernels. The patch was subsequently reverted, so this request is to backport
those patches to the Ubuntu archives.

  [Test Case]
  * deploy openstack (>= queens)
  * create router/network/instance (dvr=false,l3ha=false)
  * go to the router ns on the neutron-gateway and check that the following returns nothing
  sudo ip netns exec qrouter- iptables -t mangle -S | grep '\--sport 9697 -j CHECKSUM --checksum-fill'

  [Regression Potential]
  The original issue is no longer fixed once this patch is reverted.

  [Other Info]
  This revert patch does not remove rules added by the original patch so manual 
cleanup of those old rules is required.
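
  A hedged sketch of that manual cleanup (namespace discovery and the exact rule spec are assumptions based on the rule quoted below; run it on the node hosting the qrouter namespaces):

    import subprocess

    ns_list = subprocess.check_output(['ip', 'netns', 'list'],
                                      universal_newlines=True)
    for line in ns_list.splitlines():
        ns = line.split()[0]
        if not ns.startswith('qrouter-'):
            continue
        # Delete the old CHECKSUM rule if present; ignore namespaces without it.
        subprocess.call(['ip', 'netns', 'exec', ns, 'iptables', '-t', 'mangle',
                         '-D', 'POSTROUTING', '-p', 'tcp', '--sport', '9697',
                         '-j', 'CHECKSUM', '--checksum-fill'])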

  -
  We have a problem with the metadata service not being responsive when it is
proxied in the router namespace on some of our networking nodes after upgrading
to Ocata (running on CentOS 7.4, with the RDO packages).

  Instance routes traffic to 169.254.169.254 to it's default gateway.
  Default gateway is an OpenStack router in a namespace on a networking node.

  - Traffic gets sent from the guest,
  - to the router,
  - iptables routes it to the metadata proxy service,
  - response packet gets routed back, leaving the namespace
  - Hypervisor gets the packet in
  - Checksum of packet is wrong, and the packet gets dropped before putting it 
on the bridge

  Based on the following bug https://bugs.launchpad.net/openstack-
  ansible/+bug/1483603, we found that adding the following iptable rule
  in the router namespace made this work again: 'iptables -t mangle -I
  POSTROUTING -p tcp --sport 9697 -j CHECKSUM --checksum-fill'

  (NOTE: The rule from the 1st comment to the bug did solve access to
  the metadata service, but the lack of precision introduced other
  problems with the network)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832452] Re: NumaTopolgyFilter dosen't work as we expected when pci_numa_policy set as 'legacy'

2019-06-17 Thread Matt Riedemann
From https://review.opendev.org/#/c/664838/1/nova/pci/stats.py@a272 it
sounds like the "preferred" policy is what the user wanted.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1832452

Title:
  NumaTopolgyFilter dosen't work as we expected when pci_numa_policy set
  as 'legacy'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  In NUMATopologyFilter, we set 'pci_numa_policy=legacy' and 'hw:numa_nodes=1'
in the flavor; this causes the VM build to fail when the CPU is located in NUMA
node 1 and the PCI device is located in NUMA node 0 on the compute node. Among
the PCI NUMA affinity policies, 'required' means the allocated PCI devices must
be on the same NUMA node as the instance, 'legacy' means the allocated PCI
devices should be on the same NUMA node if possible, and 'preferred' does not
impose any stricter affinity when allocating PCI devices. But it doesn't work as
defined in practice.
  I found a logic error in the filtering of pools by NUMA cells: 'legacy' filters
the pools but does not check whether the NUMA node that has available PCI devices
is the same NUMA node as the CPU. So I changed the condition from 'or' to 'and',
and it has worked as we expected.

  
  Steps to reproduce
  ==
  1. Configuration
  controller node 
  --- nova.conf ---
  [filter_scheduler]
  enabled_filters=...,NUMATopologyFilter
  [pci]
  alias = {"name": "QuickAssist","product_id": "10ed","vendor_id": 
"8086","device_type": "type-VF","numa_policy": "legacy"}

  compute node 
  lspci -vv | grep sriov_nic_bus_info # get the numa cell of sriov nic, suppose 
as numa0
   nova.conf -
  [DEFAULT]
  vpcu_pin_set = 25,26,27,28 # numa1
   
  2. create a sriov instance
  # create a sriov port
  $ neutron port-create --vnic-type direct network_id
  # create a flavor like this
  
  +----------------------------+------------------------------------------------------------------+
  | Property                   | Value                                                            |
  +----------------------------+------------------------------------------------------------------+
  | OS-FLV-DISABLED:disabled   | False                                                            |
  | OS-FLV-EXT-DATA:ephemeral  | 0                                                                |
  | disk                       | 20                                                               |
  | extra_specs                | {"hw:pci_numa_policy": "legacy", "hw:vif_multiqueue_enabled":    |
  |                            | "true", "hw:numa_nodes": "1", "hw:cpu_cores": "4",               |
  |                            | "pci_passthrough:alias": "QuickAssist:1"}                        |
  | id                         | 430e1afd-a72b-41c6-b9b2-ea9b6aa9f037                             |
  | name                       | multiqueue                                                       |
  | os-flavor-access:is_public | True                                                             |
  | ram                        | 2048                                                             |
  | rxtx_factor                | 1.0                                                              |
  | swap                       |                                                                  |
  | vcpus                      | 4                                                                |
  +----------------------------+------------------------------------------------------------------+

[Yahoo-eng-team] [Bug 1822676] Re: novnc no longer sets token inside cookie

2019-06-17 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/649372
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9606c80402f6db20d62b689c58aa8f024183628a
Submitter: Zuul
Branch:master

commit 9606c80402f6db20d62b689c58aa8f024183628a
Author: Mohammed Naser 
Date:   Tue Apr 2 11:34:58 2019 -0400

Add 'path' query parameter to console access url

Starting in noVNC v1.1.0, the token query parameter is no longer
forwarded via cookie [1]. We must instead use the 'path' query
parameter to pass the token through to the websocketproxy [2].
This means that if someone deploys noVNC v1.1.0, VNC consoles will
break in nova because the code is relying on the cookie functionality
that v1.1.0 removed.

This modifies the ConsoleAuthToken.access_url property to include the
'path' query parameter as part of the returned access_url that the
client will use to call the console proxy service.

This change is backward compatible with noVNC < v1.1.0. The 'path' query
parameter is a long supported feature in noVNC.

Co-Authored-By: melanie witt 

Closes-Bug: #1822676

[1] 
https://github.com/novnc/noVNC/commit/51f9f0098d306bbc67cc8e02ae547921b6f6585c
[2] https://github.com/novnc/noVNC/pull/1220

Change-Id: I2ddf0f4d768b698e980594dd67206464a9cea37b
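
For illustration, the shape of the resulting access URL looks roughly like this (host, port and token below are placeholders, not values from this bug):

    from urllib.parse import urlencode

    base = 'http://novncproxy.example.com:6080/vnc_lite.html'
    token = 'dae32f50-1d10-46d3-bd77-e2ce2be74cd4'   # placeholder token

    # noVNC >= 1.1.0 forwards the 'path' query parameter to websocketproxy,
    # so the token is tunnelled inside it instead of relying on a cookie.
    access_url = '%s?%s' % (base, urlencode({'path': '?token=%s' % token}))
    print(access_url)
    # -> http://novncproxy.example.com:6080/vnc_lite.html?path=%3Ftoken%3D...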


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822676

Title:
  novnc no longer sets token inside cookie

Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-ansible:
  New

Bug description:
  For a long time, noVNC set the token inside a cookie so that when the
  /websockify request came in, we had it in the cookies and we could
  look it up from there and return the correct host.

  However, since the following commit, they've removed this behavior

  https://github.com/novnc/noVNC/commit/51f9f0098d306bbc67cc8e02ae547921b6f6585c#diff-1d6838e3812778e95699b90d530543a1L173

  This means that we're unable to use latest noVNC with Nova.  There is
  a really gross workaround of using the 'path' override in the URL for
  something like this

  http://foo/vnc_lite.html?path=?token=foo

  That feels pretty lame to me and it will have all deployment tools
  change their settings.  Also, this wasn't caught in CI because we
  deploy novnc from packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833051] Re: Unit tests in networking-bgpvpn are broken

2019-06-17 Thread Slawek Kaplonski
Patch proposed https://review.opendev.org/665637

** Project changed: neutron => bgpvpn

** Changed in: bgpvpn
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833051

Title:
  Unit tests in networking-bgpvpn are broken

Status in networking-bgpvpn:
  In Progress

Bug description:
  Unit tests in networking-bgpvpn project are failing 100% times. See
  http://logs.openstack.org/31/662231/3/check/openstack-tox-
  py36/adb9abd/testr_results.html.gz for example.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1833051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832860] Re: Failed instances stuck in BUILD state after Rocky upgrade

2019-06-17 Thread Mark Goddard
** Changed in: kolla-ansible
Milestone: None => 8.0.0

** Project changed: kolla-ansible => kolla

** Changed in: kolla
Milestone: 8.0.0 => None

** No longer affects: kolla-ansible/rocky

** Changed in: kolla
   Importance: Undecided => High

** Also affects: kolla/rocky
   Importance: Undecided
   Status: New

** Also affects: kolla/stein
   Importance: Undecided
   Status: New

** Also affects: kolla/train
   Importance: High
   Status: New

** Changed in: kolla/stein
   Importance: Undecided => High

** Changed in: kolla/rocky
   Importance: Undecided => High

** Changed in: kolla/train
Milestone: None => 9.0.0

** Changed in: kolla/stein
Milestone: None => 8.0.0

** Changed in: kolla/rocky
Milestone: None => 7.0.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1832860

Title:
  Failed instances stuck in BUILD state after Rocky upgrade

Status in kolla:
  New
Status in kolla rocky series:
  New
Status in kolla stein series:
  New
Status in kolla train series:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Steps to reproduce
  ==

  Starting with a cloud running the Queens release, upgrade to Rocky.

  Create a flavor that cannot fit on any compute node, e.g.

  openstack flavor create --ram 1 --disk 2147483647 --vcpus 1 huge

  Then create an instance using that flavor:

  openstack server create huge --flavor huge --image cirros --network
  demo-net

  Expected
  

  The instance fails to boot and ends up in the ERROR state.

  Actual
  ==

  The instance fails to boot and gets stuck in the BUILD state.

  From nova-conductor.log:

  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1244, in schedule_and_build_instances
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     tags=tags)
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1193, in _bury_in_cell0
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     instance.create()
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     return fn(self, *args, **kwargs)
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 600, in create
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     db_inst = db.instance_create(self._context, updates)
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 748, in instance_create
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     return IMPL.instance_create(context, values)
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 170, in wrapper
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 154, in wrapper
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     ectxt.value = e.inner_exc
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     self.force_reraise()
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2019-06-12 15:00:24.443 6 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
  2019-06-12 15:00:24.443 6 ERROR

[Yahoo-eng-team] [Bug 1833051] [NEW] Unit tests in networking-bgpvpn are broken

2019-06-17 Thread Slawek Kaplonski
Public bug reported:

Unit tests in networking-bgpvpn project are failing 100% times. See
http://logs.openstack.org/31/662231/3/check/openstack-tox-
py36/adb9abd/testr_results.html.gz for example.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833051

Title:
  Unit tests in networking-bgpvpn are broken

Status in neutron:
  Confirmed

Bug description:
  Unit tests in networking-bgpvpn project are failing 100% times. See
  http://logs.openstack.org/31/662231/3/check/openstack-tox-
  py36/adb9abd/testr_results.html.gz for example.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1833051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625683] Re: Gives error while running command from Horizon quickstart

2019-06-17 Thread Vishal Manchanda
Unable to reproduce this bug on the master branch, so marking it as invalid.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1625683

Title:
  Gives error while running command from Horizon quickstart

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I run Command :
  python manage.py migrate_settings --gendiff

  From:
  http://docs.openstack.org/developer/horizon/quickstart.html

  It gives error:
  http://paste.openstack.org/show/582262/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1625683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833041] [NEW] importing a keypair when keypair limit is reached will logout user

2019-06-17 Thread do3meli
Public bug reported:

When importing a keypair via the Horizon GUI while the corresponding project
quota limit is reached, the current user is automatically logged out and
redirected to the login screen.

This has been tested with OpenStack Rocky on Ubuntu 18 with the Cloud Archive
repository. Exact package: 3:14.0.2-0ubuntu2~cloud0

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1833041

Title:
  importing a keypair when keypair limit is reached will logout user

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  when importing a keypair via horizon gui while the corresponding
  project quota limit is reached will automatically logout the current
  user and redirect him to the login screen.

  this has been testet with version openstack rocky on ubuntu 18 with
  the cloud archive repository. exact package: 3:14.0.2-0ubuntu2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1833041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp