[Yahoo-eng-team] [Bug 1682222] [NEW] Instance deployment failure due to neutron syntax error

2017-04-12 Thread Rajini Ram
Public bug reported:

See the error below in n-cpu.log.
Detailed logs available at:
http://stash.opencrowbar.org/27/456127/2/check/dell-hw-tempest-dsvm-ironic-pxe_ipmitool/d29e3b6


2017-04-12 19:21:46.750 16295 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: cbd36fde5e9444f28f72acd31189cf31 __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:346
2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall [-] Fixed interval looping call 'nova.virt.ironic.driver.IronicDriver._wait_for_active' failed
2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall Traceback (most recent call last):
2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall   File "/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 137, in _run_loop
2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall   File "/opt/stack/new/nova/nova/virt/ironic/driver.py", line 431, in _wait_for_active
2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall     raise exception.InstanceDeployFailure(msg)
2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall InstanceDeployFailure: Failed to provision instance 651a266c-ea66-472b-bc61-dafe4870fdd6: Failed to prepare to deploy. Error: Failed to load DHCP provider neutron, reason: invalid syntax (neutron.py, line 153)
2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall
2017-04-12 19:21:46.801 16295 ERROR nova.virt.ironic.driver [req-9b30e546-51d9-4e4f-b4bd-cc5d75118ea3 tempest-BaremetalBasicOps-247864778 tempest-BaremetalBasicOps-247864778] Error deploying instance 651a266c-ea66-472b-bc61-dafe4870fdd6 on baremetal node 9ab67aec-921e-464d-8f2f-f9da65649a5e.
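The proximate failure is mechanical: the DHCP provider is loaded by importing a Python module, and a module containing a syntax error raises SyntaxError at import time. A minimal sketch of that failure mode (this is not Ironic's actual loader; the module name and wrapping message are illustrative):

```python
# Sketch: importing a provider module that contains a syntax error fails
# at load time, which is how a typo in neutron.py surfaces as
# "Failed to load DHCP provider ... invalid syntax".
import importlib
import os
import sys
import tempfile

def load_provider(name, search_path):
    """Import a provider module by name, wrapping load-time errors."""
    sys.path.insert(0, search_path)
    try:
        return importlib.import_module(name)
    except SyntaxError as exc:
        raise RuntimeError(
            "Failed to load DHCP provider %s, reason: %s (%s, line %s)"
            % (name, exc.msg, os.path.basename(exc.filename), exc.lineno))
    finally:
        sys.path.pop(0)

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "bad_provider.py"), "w") as f:
    f.write("def update_dhcp(:\n")  # deliberate syntax error

try:
    load_provider("bad_provider", tmpdir)
    message = "unexpectedly loaded"
except RuntimeError as exc:
    message = str(exc)
```

Because the error is raised at import, it appears only when the provider is first loaded, i.e. during the first deploy attempt rather than at service startup.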

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1682222

Title:
  Instance deployment failure due to neutron syntax error

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1682222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681945] [NEW] Neutron Agent error "constraint violation"

2017-04-11 Thread Rajini Ram
Public bug reported:

screen-q-agt.txt:2017-04-11 20:40:50.354 8722 ERROR neutron.agent.ovsdb.impl_vsctl [req-e330d428-6ab8-4594-8805-139a8ffa67b5 - -] Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--id=@manager', 'create', 'Manager', 'target="ptcp:6640:127.0.0.1"', '--', 'add', 'Open_vSwitch', '.', 'manager_options', '@manager']. Exception: Exit code: 1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: transaction error: {"details":"Transaction causes multiple rows in \"Manager\" table to have identical values (\"ptcp:6640:127.0.0.1\") for index on column \"target\".  First row, with UUID c84c9746-c000-45b5-b5df-c20a39aa4569, was inserted by this transaction.  Second row, with UUID b80bee4e-6bea-4dcd-a3ac-12aa027dccf5, existed in the database before this transaction and was not modified by the transaction.","error":"constraint violation"}

Full logs: http://10.49.81.37/logs/12/453012/3/check/dell-hw-tempest-dsvm-ironic-pxe_drac/ff03702/
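The stderr payload after "transaction error:" is a JSON object, so an agent could distinguish this duplicate-Manager case from other transaction failures by parsing it. A small sketch, using an abbreviated copy of the payload from the log above:

```python
# Parse the ovsdb transaction error that ovs-vsctl prints on stderr.
import json

stderr = ('ovs-vsctl: transaction error: {"details":"Transaction causes '
          'multiple rows in \\"Manager\\" table to have identical values '
          '(\\"ptcp:6640:127.0.0.1\\") for index on column \\"target\\".",'
          '"error":"constraint violation"}')

# Everything after the "transaction error: " marker is JSON.
payload = json.loads(stderr.split("transaction error: ", 1)[1])
is_duplicate_manager = (payload["error"] == "constraint violation"
                        and '"Manager"' in payload["details"])
```

As for a fix, one hedged avenue is making the manager setup idempotent, e.g. checking for an existing row first, or using `ovs-vsctl set-manager`, which replaces the current manager set instead of inserting a second row with the same target.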

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1681945

Title:
  Neutron Agent error "constraint violation"

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1681945/+subscriptions



[Yahoo-eng-team] [Bug 1675151] [NEW] Switch to haproxy fails dhcp request

2017-03-22 Thread Rajini Ram
Public bug reported:

Looks like the patch switching ns-metadata-proxy to haproxy is breaking our Dell Ironic CI builds: haproxy rejects the generated config file.


The patch https://review.openstack.org/#/c/431691/ broke our Ironic CI builds; can you look at the logs and help me fix this?
Full logs:
https://stash.opencrowbar.org/logs/71/446571/12/check/dell-hw-tempest-dsvm-ironic-pxe_ipmitool/3a6e628/
f6ecb-068d-4c89-8fe9-c4c14f0228ee.pid get_value_from_file /opt/stack/new/neutron/neutron/agent/linux/utils.py:252
2017-03-22 16:30:10.365 15528 DEBUG neutron.agent.metadata.driver [req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] haproxy_cfg =
global
    log /dev/log local0 debug
    user stack
    group stack
    maxconn 1024
    pidfile /opt/stack/data/neutron/external/pids/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.pid
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor
    retries 3
    timeout http-request 30s
    timeout connect 30s
    timeout client 32s
    timeout server 32s
    timeout http-keep-alive 30s

listen listener
    bind 0.0.0.0:80
    server metadata /opt/stack/data/neutron/metadata_proxy
    http-request add-header X-Neutron-Network-ID 075f6ecb-068d-4c89-8fe9-c4c14f0228ee
create_config_file /opt/stack/new/neutron/neutron/agent/metadata/driver.py:126
2017-03-22 16:30:10.366 15528 DEBUG neutron.agent.linux.utils [req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qdhcp-075f6ecb-068d-4c89-8fe9-c4c14f0228ee', 'haproxy', '-f', '/opt/stack/data/neutron/ns-metadata-proxy/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.conf'] execute_rootwrap_daemon /opt/stack/new/neutron/neutron/agent/linux/utils.py:108
2017-03-22 16:30:10.440 15528 ERROR neutron.agent.linux.utils [req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Exit code: 1; Stdin: ; Stdout: ; Stderr: [ALERT] 080/163010 (16482) : parsing [/opt/stack/data/neutron/ns-metadata-proxy/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.conf:26] : Unknown host in '/opt/stack/data/neutron/metadata_proxy'
[ALERT] 080/163010 (16482) : parsing [/opt/stack/data/neutron/ns-metadata-proxy/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.conf:27]: unknown parameter 'X-Neutron-Network-ID', expects 'allow', 'deny', 'auth'.
[ALERT] 080/163010 (16482) : Error(s) found in configuration file : /opt/stack/data/neutron/ns-metadata-proxy/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.conf
 
2017-03-22 16:30:10.440 15528 DEBUG oslo_concurrency.lockutils [req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Releasing semaphore "dhcp-agent-network-lock-075f6ecb-068d-4c89-8fe9-c4c14f0228ee" lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server [req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Exception during message handling
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 157, in _process_incoming
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 153, in wrapper
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/neutron/neutron/agent/dhcp/agent.py", line 61, in wrapped
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/neutron/neutron/agent/dhcp/agent.py", line 363, in network_create_end
2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server
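Two distinct problems show up in the alerts. First, haproxy parses the bare filesystem path on the `server` line as a hostname; addressing a unix socket requires the `unix@` prefix. Second, the installed haproxy appears too old for header manipulation: haproxy 1.4's `http-request` accepts only 'allow', 'deny', and 'auth', while `http-request add-header` arrived in 1.5. A hypothetical stanza that a 1.5+ haproxy would accept, using the socket path from the log (this is a sketch, not the patch's actual output):

```
listen listener
    bind 0.0.0.0:80
    http-request add-header X-Neutron-Network-ID 075f6ecb-068d-4c89-8fe9-c4c14f0228ee
    server metadata unix@/opt/stack/data/neutron/metadata_proxy
```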

[Yahoo-eng-team] [Bug 1477163] [NEW] When we create instance with Ubuntu 15.0.4 image, default route is missing

2015-07-22 Thread Ram Kumar
Public bug reported:

We have issues with the new 15.0.4 (Ubuntu) image. When we create a new
instance in Icehouse with an older version of Ubuntu or another flavor of
Linux, it works fine, but with the 15.0.4 image the default route is not
created. The issue is resolved when we add the default route manually.
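For reference, the manual workaround is adding the route inside the guest, e.g. `ip route add default via <gateway-ip> dev eth0` (gateway and interface names depend on the deployment). With classic ifupdown, a static stanza in /etc/network/interfaces installs the route at ifup time via its `gateway` line; a hypothetical example with illustrative addresses:

```
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```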

Thanks,

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477163

Title:
  When we create instance with Ubuntu 15.0.4 image, default route is
  missing

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1477163/+subscriptions



[Yahoo-eng-team] [Bug 1338735] [NEW] Live-migration with volumes creates orphan access records

2014-07-07 Thread Rajini Ram
Public bug reported:

When live migration is performed on an instance with a volume attached, nova
sends two initialize-connection requests and one terminate-connection
request. This causes orphan access records in some storage arrays (tested
with the Dell EqualLogic driver).

Steps to reproduce:
1. Have a setup with one controller and two compute nodes. Set up the cinder volume backend on iSCSI or a storage array.
2. Create an instance.
3. Create a volume and attach it to the instance.
4. Check the location of the instance (computenode1 or computenode2):
   nova show instance1
5. Perform live migration of the instance, moving it to the second compute node:
   nova live-migration instance1 computenode2
6. Check the cinder API log (c-api). There will be two os-initialize_connection calls and one os-terminate_connection call; there should be only one matched initialize/terminate pair.
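Step 6's check can be mechanized by counting the two call types in the c-api log. The sample lines below are illustrative, not from a real log:

```python
# Count connection lifecycle calls; a surplus of initialize over
# terminate indicates leaked (orphan) access records on the array.
sample_log = """\
POST /v2/demo/volumes/42/action os-initialize_connection
POST /v2/demo/volumes/42/action os-initialize_connection
POST /v2/demo/volumes/42/action os-terminate_connection
"""

init_calls = sample_log.count("os-initialize_connection")
term_calls = sample_log.count("os-terminate_connection")
orphaned = init_calls - term_calls  # > 0 means leaked access records
```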

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cinder live-migration nova

** Description changed:

  When live migration is performed on instances with volume attached, nova
- sends two initiator commands and one ter5.minate connection. This causes
+ sends two initiator commands and one terminate connection. This causes
  orphan access records in some storage arrays ( tested with Dell
  EqualLogic Driver).
  
  Steps to reproduce:
- 1. Have one controller and two compute node setup. Setup cinder volume on 
iscsi or a storage array. 
+ 1. Have one controller and two compute node setup. Setup cinder volume on 
iscsi or a storage array.
  2. Create an instance
- 3. Create a volume and attach it to the instance 
+ 3. Create a volume and attach it to the instance
  4. Check the location of the instance ( computenode1 or 2)
  nova instance1 show
  4. Perform live migration of the instance and move it to the second compute 
node
  nova live-migration instance1 computenode2
  5. Check the cinder api log. c-api
  There will be two os-initialize_connection and one os-terminate-connection. 
There should only be one set of initialize and terminate connection

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338735

Title:
  Live-migration with volumes creates orphan access records

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1338735/+subscriptions



[Yahoo-eng-team] [Bug 1322139] Re: VXLAN kernel requirement check for openvswitch agent is not working

2014-05-29 Thread ram
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1322139

Title:
  VXLAN kernel requirement check for openvswitch agent is not working

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  
  On RHEL 7 beta, an agent set to use VXLAN tunneling does not start.
  I'm using RDO packages; if you want I can check the upstream version somewhere, but it seems the code that checks the version is still in GitHub.

  in openvswitch-agent.log I see:

  2014-05-21 13:53:21.762 1814 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-ce4eadcb-4cbb-4c09-a404-98ecb5383fa5 None] Agent terminated
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File /usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py, line 231, in _check_ovs_version
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     ovs_lib.check_ovs_vxlan_version(self.root_helper)
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File /usr/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py, line 551, in check_ovs_vxlan_version
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     'kernel', 'VXLAN')
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File /usr/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py, line 529, in _compare_installed_and_required_version
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     raise SystemError(msg)
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent SystemError: Unable to determine kernel version for Open vSwitch with VXLAN support. To use VXLAN tunnels with OVS, please ensure that the version is 1.10 or newer!
  2014-05-21 13:53:21.762 1814 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent

  It seems that the minimum kernel version required to use VXLAN is set
  to 3.13 (shouldn't 3.9 be enough?). RHEL 7 ships only 3.10. The vxlan
  module, however, is present and working, and even if the only module
  working properly is in the 3.13 kernel, the check doesn't take
  backported features into consideration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1322139/+subscriptions



[Yahoo-eng-team] [Bug 1301337] Re: We have three duplicated tests for check_ovs_vxlan_version

2014-05-29 Thread ram
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301337

Title:
  We have three duplicated tests for check_ovs_vxlan_version

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  test_ovs_lib, test_ovs_neutron_agent and test_ofa_neutron_agent have duplicated the same unit tests for check_ovs_vxlan_version. The only difference is SystemError (from ovs_lib) versus SystemExit (from the agents).
  The tested logic is 99% the same, and the unit tests in the ovs/ofa agents look unnecessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301337/+subscriptions
