[Yahoo-eng-team] [Bug 1728600] Re: Test test_network_basic_ops fails time to time, port doesn't become ACTIVE quickly

2019-01-21 Thread Attila Fazekas
Nova is expected to wait for all connected ports to become active on instance
creation before reporting the instance as active.
No additional user/tempest wait should be required, and no random port status
flipping is allowed.
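
For context, a minimal sketch of what an extra tempest-side wait would look
like (illustrative only, since the point above is that it should not be
needed; wait_for_port_active is a hypothetical helper, while show_port is a
real tempest PortsClient call):

    import time

    def wait_for_port_active(ports_client, port_id, timeout=60, interval=2):
        # Poll the port until Neutron reports it ACTIVE instead of the
        # BUILD status visible in the trace below.
        deadline = time.time() + timeout
        while time.time() < deadline:
            port = ports_client.show_port(port_id)['port']
            if port['status'] == 'ACTIVE':
                return port
            time.sleep(interval)
        raise AssertionError("Port %s never became ACTIVE" % port_id)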

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728600

Title:
  Test test_network_basic_ops fails time to time, port doesn't become
  ACTIVE quickly

Status in OpenStack Compute (nova):
  New
Status in tempest:
  In Progress

Bug description:
  Test test_network_basic_ops fails time to time, port doesn't become
  ACTIVE quickly

  Trace:
  Traceback (most recent call last):
    File "tempest/scenario/test_security_groups_basic_ops.py", line 185, in setUp
      self._deploy_tenant(self.primary_tenant)
    File "tempest/scenario/test_security_groups_basic_ops.py", line 349, in _deploy_tenant
      self._set_access_point(tenant)
    File "tempest/scenario/test_security_groups_basic_ops.py", line 316, in _set_access_point
      self._assign_floating_ips(tenant, server)
    File "tempest/scenario/test_security_groups_basic_ops.py", line 322, in _assign_floating_ips
      client=tenant.manager.floating_ips_client)
    File "tempest/scenario/manager.py", line 836, in create_floating_ip
      port_id, ip4 = self._get_server_port_id_and_ip4(thing)
    File "tempest/scenario/manager.py", line 814, in _get_server_port_id_and_ip4
      "No IPv4 addresses found in: %s" % ports)
    File "/usr/local/lib/python2.7/dist-packages/unittest2/case.py", line 845, in assertNotEqual
      raise self.failureException(msg)
  AssertionError: 0 == 0 : No IPv4 addresses found in: [{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': u'2017-10-30T10:04:41Z', u'device_owner': u'compute:None', u'revision_number': 9, u'port_security_enabled': True, u'binding:profile': {}, u'fixed_ips': [{u'subnet_id': u'd522b2e5-7e56-4d08-843c-c434c3c2af97', u'ip_address': u'10.100.0.12'}], u'id': u'20d59775-906d-4390-b193-a8ec81817ddb', u'security_groups': [u'908eb03d-2477-49ab-ab9a-fcfae454', u'cf62ee1b-eb73-44d0-9ad8-65bb32885505'], u'binding:vif_details': {u'port_filter': True, u'ovs_hybrid_plug': True}, u'binding:vif_type': u'ovs', u'mac_address': u'fa:16:3e:02:f3:e8', u'project_id': u'0a8532fba2194d32996c3ba46ae35c96', u'status': u'BUILD', u'binding:host_id': u'cfg01', u'description': u'', u'tags': [], u'device_id': u'5ad8f2be-3cbb-49aa-8d72-e81ca6789665', u'name': u'', u'admin_state_up': True, u'network_id': u'49491fd4-2c1e-4c46-8166-b4648eb75f84', u'tenant_id': u'0a8532fba2194d32996c3ba46ae35c96', u'created_at': u'2017-10-30T10:04:37Z', u'binding:vnic_type': u'normal'}]

  Ran 1 test in 25.096s

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1728600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1803925] Re: There is no interface for operators to migrate *all* the existing compute resource providers to be ready for nested providers

2019-01-21 Thread Tetsuro Nakamura
Abandoned https://review.openstack.org/#/c/619126/ in favor of
https://review.openstack.org/#/c/624943/, which is now committed.

** Changed in: nova
   Status: New => Won't Fix

** Changed in: nova
   Status: Won't Fix => Confirmed

** Changed in: nova
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803925

Title:
  There is no interface for operators to migrate *all* the existing
  compute resource providers to be ready for nested providers

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When the nested resource provider feature was added in Rocky, a
  root_provider_uuid column, which should hold a non-None value, was added
  to the resource provider DB. For existing resource providers created
  before Queens, we have an online data migration:

  https://review.openstack.org/#/c/377138/62/nova/objects/resource_provider.py@917

  But it is only run when listing/showing resource providers. We should
  have an explicit migration script, something like "placement-manage db
  online_data_migrations", to make sure all the resource providers are
  ready for the nested provider feature, that is, that every
  root_provider_uuid column has a non-None value.

  This bug can be closed when the following tasks are done (a sketch of
  what the migration could look like follows the list):
   - Provide something like "placement-manage db online_data_migrations" so
     that in Stein we are sure every root_provider_uuid column has a
     non-None value.
   - Clean up placement/objects/resource_provider.py, removing the many
     TODOs like "Change this to an inner join when we are sure all
     root_provider_id values are NOT NULL".

  NOTE: This report is created after fixing/closing
  https://bugs.launchpad.net/nova/+bug/1799892 in a temporary way
  without the explicit DB migration script.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1803925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1812676] [NEW] cloud-init-per mishandles commands with dashes

2019-01-21 Thread Vitaly Kuznetsov
Public bug reported:

It was found that when there is a dash in a cloud-init-per command
name and cloud-init-per is executed through cloud-init's bootcmd, e.g.:

bootcmd:
- cloud-init-per instance mycmd-bootcmd /usr/bin/mycmd

the command is executed on each boot. However, running the same
cloud-init-per command manually after boot doesn't reveal the issue. It
turns out the issue comes from the 'migrator' cloud-init module, which
renames all files in /var/lib/cloud/instance/sem/, replacing dashes with
underscores. As migrator runs before bootcmd, it renames

/var/lib/cloud/instance/sem/bootper.mycmd-bootcmd.instance
to
/var/lib/cloud/instance/sem/bootper.mycmd_bootcmd.instance

so cloud-init-per doesn't see it and thinks that the command was never run
before. On the next boot the sequence repeats.

There are multiple ways to resolve the issue. This patch takes the
following approach: 'canonicalize' sem names by replacing dashes with
underscores (this is consistent with the post-'migrator' contents of
/var/lib/cloud/instance/sem/). We do, however, need to be careful: in case
someone had a command with dashes before and had the migrator module
enabled, we need to see the old sem file (or the command will run again,
and this can be as bad as formatting a partition!), so we add a small
'migrator' part to the cloud-init-per script itself that checks for legacy
sem names.
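
The approach, sketched in Python (illustrative only; cloud-init-per itself
is a shell script and this is not the attached patch. SEM_DIR and sem_path
are made-up names, while the bootper.<name>.<freq> layout comes from the
paths above):

    import os

    SEM_DIR = "/var/lib/cloud/instance/sem"

    def sem_path(freq, name):
        # Canonical name: dashes replaced with underscores, matching what
        # the 'migrator' module leaves behind.
        canonical = os.path.join(
            SEM_DIR, "bootper.%s.%s" % (name.replace("-", "_"), freq))
        legacy = os.path.join(SEM_DIR, "bootper.%s.%s" % (name, freq))
        # Honor a pre-existing dashed sem file so commands recorded before
        # this fix don't run a second time.
        if not os.path.exists(canonical) and os.path.exists(legacy):
            return legacy
        return canonical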

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Patch added: "0001-cloud-init-per-don-t-use-dashes-in-sem-names.patch"
   https://bugs.launchpad.net/bugs/1812676/+attachment/5231108/+files/0001-cloud-init-per-don-t-use-dashes-in-sem-names.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1812676

Title:
  cloud-init-per mishandles commands with dashes

Status in cloud-init:
  New

Bug description:
  It was found that when there is a dash in cloud-init-per command
  name and cloud-init-per is executed through cloud-init's bootcmd, e.g:

  bootcmd:
  - cloud-init-per instance mycmd-bootcmd /usr/bin/mycmd

  the command is executed on each boot. However, running the same
  cloud-init-per command manually after boot doesn't reveal the issue. Turns
  out the issue comes from 'migrator' cloud-init module which renames all
  files in /var/lib/cloud/instance/sem/ replacing dashes with underscores. As
  migrator runs before bootcmd it renames
  
  /var/lib/cloud/instance/sem/bootper.mycmd-bootcmd.instance
  to
  /var/lib/cloud/instance/sem/bootper.mycmd_bootcmd.instance

  so cloud-init-per doesn't see it and thinks that the comment was never ran
  before. On next boot the sequence repeats.

  There are multiple ways to resolve the issue. This patch takes the
  following approach: 'canonicalize' sem names by replacing dashes with
  underscores (this is consistent with post-'migrator' contents of
  /var/lib/cloud/instance/sem/). We, however, need to be careful: in case
  someone had a command with dashes before and he had migrator module enables
  we need to see the old sem file (or the command will run again and this can
  be as bad as formatting a partition!) so we add a small 'migrator' part to
  cloud-init-per script itself checking for legacy sem names.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1812676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1809497] Re: _get_filterid_for_ip can generate an UnboundLocalError

2019-01-21 Thread OpenStack Infra
Reviewed: https://review.openstack.org/630773
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=e788d294584add2407f601a46f90b080825f
Submitter: Zuul
Branch: master

commit e788d294584add2407f601a46f90b080825f
Author: Slawek Kaplonski 
Date:   Mon Jan 14 22:29:25 2019 +0100

Support iproute2 4.15 in l3_tc_lib

In version 4.15 of iproute2, support was added for a
chain index in tc_filter [1].
That version is available e.g. in Ubuntu 18.04, so it
has to be supported in the l3_tc_lib regex in order to
properly match the output of the "tc filter" command.

[1] https://lwn.net/Articles/745643/

Closes-bug: #1809497
Change-Id: Id4066b5cff933ccd0dd3c751bf67b5d58af662d1
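
As a rough illustration of the kind of regex change involved (the pattern
and the sample lines below are illustrative, not neutron's exact code or
verbatim tc output):

    import re

    # iproute2 >= 4.15 inserts a "chain <N>" token before "fh", so the
    # filter-id pattern has to treat it as optional.
    FILTER_ID = re.compile(
        r"filter protocol ip u32 (?:chain \d+ )?fh (?P<id>\w+::\w+)")

    old_style = "filter protocol ip u32 fh 800::800 order 2048 flowid :1"
    new_style = "filter protocol ip u32 chain 0 fh 800::800 order 2048 flowid :1"

    for line in (old_style, new_style):
        match = FILTER_ID.search(line)
        print(match.group("id") if match else "no match")  # 800::800 twice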


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1809497

Title:
  _get_filterid_for_ip can generate an UnboundLocalError

Status in neutron:
  Fix Released

Bug description:
  After fixing a bug in the L3 extension API,
  https://review.openstack.org/#/c/626401/ - the l3-agent is getting a
  traceback in the QoS code running the dvr multinode scenario job.

  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent [-] Failed to process compatible router: 7461a0e1-508c-4ff2-a559-6cc89a128ea5: UnboundLocalError: local variable 'filter_id' referenced before assignment
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent Traceback (most recent call last):
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 628, in _process_routers_if_compatible
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     self._process_router_if_compatible(router)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 504, in _process_router_if_compatible
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     self._process_updated_router(router)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 529, in _process_updated_router
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     self.l3_ext_manager.update_router(self.context, router)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/l3_agent_extensions_manager.py", line 54, in update_router
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     extension.obj.update_router(context, data)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/usr/local/lib/python3.6/dist-packages/oslo_concurrency/lockutils.py", line 328, in inner
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     return f(*args, **kwargs)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/extensions/qos/fip.py", line 292, in update_router
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     self.process_floating_ip_addresses(context, router_info)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/extensions/qos/fip.py", line 277, in process_floating_ip_addresses
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     self._remove_fip_rate_limit(device, fip)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/extensions/qos/fip.py", line 164, in _remove_fip_rate_limit
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     tc_wrapper.clear_ip_rate_limit(direction, fip_ip)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/linux/l3_tc_lib.py", line 188, in clear_ip_rate_limit
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     filter_id = self._get_filterid_for_ip(qdisc_id, ip)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/linux/l3_tc_lib.py", line 82, in _get_filterid_for_ip
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent     filterids_for_ip.append(filter_id)
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent UnboundLocalError: local variable 'filter_id' referenced before assignment
  neutron-l3-agent[14973]: ERROR neutron.agent.l3.agent

  From this log file:

  http://logs.openstack.org/01/626401/1/check/neutron-tempest-plugin-dvr-multinode-scenario/a7bcbe2/controller/logs/screen-q-l3.txt.gz?level=WARNING#_Dec_19_23_26_00_169583

  The loop in _get_filterid_for_ip() doesn't appear to protect against
  this case; we'd need to capture the output to see why it's failing. A
  sketch of the failure pattern follows.
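
  A minimal reproduction of the pattern (illustrative, not neutron's exact
  code): filter_id is only bound when a line yields a filter handle, so if
  no line does before the append, the local is referenced unbound.

      def get_filterids_for_ip(tc_output_lines, ip):
          filterids_for_ip = []
          for line in tc_output_lines:
              parts = line.split()
              if "fh" in parts:
                  filter_id = parts[parts.index("fh") + 1]  # bound only here
              if ip in line:
                  # UnboundLocalError if no "fh" token was seen first, e.g.
                  # because a newer tc output format stopped matching.
                  filterids_for_ip.append(filter_id)
          return filterids_for_ip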

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1809497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1742467] Re: Compute unnecessarily gets resource provider aggregates during every update_available_resource run

2019-01-21 Thread OpenStack Infra
Reviewed: https://review.openstack.org/615677
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=deef31729bd54f3747b7adba4132f148559c2242
Submitter: Zuul
Branch: master

commit deef31729bd54f3747b7adba4132f148559c2242
Author: Eric Fried 
Date:   Mon Nov 5 16:04:10 2018 -0600

Reduce calls to placement from _ensure

Prior to this patch, the report client's update_from_provider_tree
method would, upon failure of any placement API call, invalidate the
cache *just* for the failing provider (and any descendants) and attempt
to continue operating on any other providers in the tree.

With this patch, we instead invalidate the tree around the failing
provider and fail right away.

In real life, since we don't yet have any implementations of nested,
this would have been effectively a null change.

Except: this allows us to resolve a TODO whereby we would *always*
_ensure_resource_provider (including a call to GET
/resource_providers?in_tree=$compute_rp) on every periodic. Now we can
optimize that out.

This should reduce the number of calls to placement per RT periodic to
zero in steady state when [compute]resource_provider_association_refresh
is zero.

Closes-Bug: #1742467

Change-Id: Ieeaad9783e0ff93377fbc6c7932618d2fac8946a


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742467

Title:
  Compute unnecessarily gets resource provider aggregates during every
  update_available_resource run

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  This was provided by Kris Lindgren from GoDaddy on his Pike deployment
  that is now running with Placement.

  He noted that for every update_available_resource periodic task run,
  these are the calls to Placement if inventory didn't change:

  https://paste.ubuntu.com/26356656/

  So there are 5 GET requests in there.

  In this run, there isn't a call to get the resource provider itself
  because the SchedulerReportClient has it cached in the
  _resource_providers dict.

  But it still gets aggregates for the provider twice because it always
  wants to be up to date.

  The aggregates are put in the _provider_aggregate_map, however, that
  code is never used by anything since nova doesn't yet support resource
  provider aggregates since those are needed for shared resource
  providers, like a shared storage pool.

  Until nova supports shared providers, we likely should just comment
  the _provider_aggregate_map code out if nothing is using it to avoid
  the extra HTTP requests to Placement every minute (the default
  periodic task interval).
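
  As a rough sketch of the steady-state behaviour the fix aims for
  (hypothetical code, not nova's actual SchedulerReportClient; only the
  _resource_providers cache and the refresh option come from the
  description, and _fetch_provider_tree is a made-up helper standing in
  for GET /resource_providers?in_tree=...):

      def ensure_resource_provider(self, rp_uuid):
          # Steady state: provider already cached and association refresh
          # disabled, so the periodic makes no placement calls at all.
          if rp_uuid in self._resource_providers and not self._association_refresh:
              return self._resource_providers[rp_uuid]
          provider = self._fetch_provider_tree(rp_uuid)
          self._resource_providers[rp_uuid] = provider
          return provider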

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1811941] Re: neutron routers are down

2019-01-21 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1811941

Title:
  neutron routers are down

Status in neutron:
  Invalid

Bug description:
  It seems that Red Hat OpenStack 14 has problems implementing virtual
  bridges. It affects the Neutron (Networking) service on OpenStack;
  network interfaces on the routers should not be in DOWN state. I need a
  fix if there is any... see attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1811941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573095] Re: 16.04 cloud image hangs at first boot

2019-01-21 Thread Robie Basak
Here are full steps to reproduce this issue using tooling from Ubuntu
only:

uvt-simplestreams-libvirt sync release=bionic arch=amd64 label=release
uvt-kvm create --no-start lp1573095 release=bionic arch=amd64 label=release
virsh edit lp1573095  # delete the <serial> and <console> blocks
virsh start lp1573095
uvt-kvm wait lp1573095

Expected behaviour: succeeds when the VM is available
Actual behaviour: hangs and eventually times out

Additionally you can examine the screen with virt-manager. On that screen, I
expect a login prompt. Instead I see nothing beyond the normal kernel messages
(nothing from userspace).

If you skip the serial/console definition deletion in the steps above, you'll
see that the VM works. In other words, the VM stops working if a serial port is
not available.

Workaround: remove console=ttyS0 from GRUB_CMDLINE_LINUX_DEFAULT in
/etc/default/grub.d/50-cloudimg-settings.cfg, leaving only console=tty1, and
then run "sudo update-grub". However, this must either be done on a system with
a serial port, or you have to jump through the appropriate hoops to get the
result of "update-grub" applied without having booted the system. Note
that editing /etc/default/grub is insufficient since
/etc/default/grub.d/50-cloudimg-settings.cfg overrides it (see bug 1812752).


** Summary changed:

- 16.04 cloud image hangs at first boot
+ Cloud images fail to boot when a serial port is not available

** Changed in: ubuntu
   Status: Confirmed => Invalid

** Changed in: cloud-images
   Status: New => Confirmed

** Changed in: cloud-init
   Status: Fix Released => Invalid

** Description changed:

  I tried to launch a ubuntu 16.04 cloud image within KVM.
- The image is not booting up and hangs at 
+ The image is not booting up and hangs at
  
  "Btrfs loaded"
  
  Hypervisor env is Proxmox 4.1
+ 
+ [racb: see comment 40 for minimal steps to reproduce using Ubuntu-
+ provided tooling only)

** Description changed:

  I tried to launch a ubuntu 16.04 cloud image within KVM.
  The image is not booting up and hangs at
  
  "Btrfs loaded"
  
  Hypervisor env is Proxmox 4.1
  
  [racb: see comment 40 for minimal steps to reproduce using Ubuntu-
- provided tooling only)
+ provided tooling only]

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1573095

Title:
  Cloud images fail to boot when a serial port is not available

Status in cloud-images:
  Confirmed
Status in cloud-init:
  Invalid
Status in Ubuntu:
  Invalid

Bug description:
  I tried to launch a ubuntu 16.04 cloud image within KVM.
  The image is not booting up and hangs at

  "Btrfs loaded"

  Hypervisor env is Proxmox 4.1

  [racb: see comment 40 for minimal steps to reproduce using Ubuntu-
  provided tooling only]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1573095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1812788] [NEW] Port with no active binding mark as dead

2019-01-21 Thread Pawel Suder
Public bug reported:

In change

- https://github.com/openstack/neutron/commit/5c3bf124966a310cbc6c8ffad5ab30b144d9d7aa#diff-9cca2f63ca397a7e93909a7119fdd16fL1582
- https://review.openstack.org/#/c/574058/

the port is marked as dead in both cases:

- c_const.NO_ACTIVE_BINDING in details
- c_const.NO_ACTIVE_BINDING not in details

Question: Should the port be marked as dead in the first case? (See the
sketch below.)

Proposed change: https://review.openstack.org/#/c/632045
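
A toy sketch of the control flow in question (only NO_ACTIVE_BINDING is
taken from the change above; the function, argument names and log text are
made up):

    NO_ACTIVE_BINDING = "no_active_binding"

    def handle_unbound_port(details, port_dead, log):
        if NO_ACTIVE_BINDING in details:
            # Case 1: the port has a binding, just not an active one on
            # this host; the bug asks whether marking it dead is right.
            log("No active binding on this host")
        else:
            # Case 2: the device is simply not known to the plugin.
            log("Device not defined on plugin")
        # Currently runs for both cases; the proposed change would
        # restrict it to case 2.
        port_dead()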

** Affects: neutron
 Importance: Undecided
 Assignee: Pawel Suder (pasuder)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1812788

Title:
  Port with no active binding mark as dead

Status in neutron:
  In Progress

Bug description:
  In change

  - 
https://github.com/openstack/neutron/commit/5c3bf124966a310cbc6c8ffad5ab30b144d9d7aa#diff-9cca2f63ca397a7e93909a7119fdd16fL1582
  - https://review.openstack.org/#/c/574058/

  port is mark as dead in both cases:

  - c_const.NO_ACTIVE_BINDING in details
  - c_const.NO_ACTIVE_BINDING not in details

  Question: Should be port mark as dead in first case?

  Proposed change: https://review.openstack.org/#/c/632045

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1812788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp