[Yahoo-eng-team] [Bug 1459065] [NEW] Unable to update the user - unable to retrieve user list

2015-05-26 Thread Canh Truong
Public bug reported:

The steps to reproduce the bug:
 1/ Log in to OpenStack as the admin user.
 2/ Go to Identity -> Users -> Edit to update the admin user.
 3/ Set the admin user's primary project to admin -- the update succeeds.
 4/ Edit the admin user again and set the primary project to demo -- updating the user fails with the following errors:
Error: Unable to update the user.
Error: Unauthorized: Unable to retrieve user list.

However, after signing out of OpenStack and signing back in as the admin
user, the user list is updated normally.
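
For reference, a roughly equivalent way to trigger the same update outside
Horizon (a sketch only; it assumes the python-openstackclient CLI and the
project names used in the steps above):

  # set the admin user's primary (default) project to demo
  openstack user set --project demo admin
  # an admin-scoped call made with the old token may then fail with 401
  openstack user list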

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459065

Title:
  Unable to update the user - unable to retrieve user list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The steps to reproduce the bug:
   1/ Log in to OpenStack as the admin user.
   2/ Go to Identity -> Users -> Edit to update the admin user.
   3/ Set the admin user's primary project to admin -- the update succeeds.
   4/ Edit the admin user again and set the primary project to demo -- updating the user fails with the following errors:
  Error: Unable to update the user.
  Error: Unauthorized: Unable to retrieve user list.

  However, after signing out of OpenStack and signing back in as the admin
  user, the user list is updated normally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459073] [NEW] Multipath device descriptor is not deleted after volume detached when FC and multipath are used

2015-05-26 Thread Tina Tang
Public bug reported:

When Fibre Channel and multipath are used, the multipath device
descriptor is not deleted after a volume is detached from a VM. This is
always reproducible.

This was seen with git stable/kilo.

Reproduce steps:
1. Check the multipath device on the system
stack@ubuntu-server12:/proc/sys/dev/scsi$ sudo multipath -ll
3600508e0c57de73d39c00b0e dm-0 LSI,Logical Volume
size=222G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:1:1:0  sdb 8:16  active ready  running

2. Attach a volume to a VM running on the current system, and wait till the
volume becomes 'in-use'
stack@ubuntu-server12:/proc/sys/dev/scsi$ nova volume-attach c6244601-73e0-4210-8654-356a4154883b fa7c94ae-e666-4923-be2b-8965036dce1a
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | fa7c94ae-e666-4923-be2b-8965036dce1a |
| serverId | c6244601-73e0-4210-8654-356a4154883b |
| volumeId | fa7c94ae-e666-4923-be2b-8965036dce1a |
+----------+--------------------------------------+
stack@ubuntu-server12:/proc/sys/dev/scsi$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| fa7c94ae-e666-4923-be2b-8965036dce1a |   in-use  | vol1 |  1   |     None    |  false   | c6244601-73e0-4210-8654-356a4154883b |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+

3. Check the multipath devices on the system. 3600601602ba03400efd6d0247c03e511
is the multipath device mapped to the attached volume
stack@ubuntu-server12:/proc/sys/dev/scsi$ sudo multipath -ll
3600601602ba03400efd6d0247c03e511 dm-1 DGC,VRAID
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=130 status=active
| `- 7:0:4:225 sdp 8:240 active ready  running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 7:0:2:225 sdo 8:224 active ready  running
3600508e0c57de73d39c00b0e dm-0 LSI,Logical Volume
size=222G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:1:1:0   sdb 8:16  active ready  running

4. Detach the volume and wait till the volume becomes 'available'

stack@ubuntu-server12:/proc/sys/dev/scsi$ nova volume-detach c6244601-73e0-4210-8654-356a4154883b fa7c94ae-e666-4923-be2b-8965036dce1a
stack@ubuntu-server12:/proc/sys/dev/scsi$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| fa7c94ae-e666-4923-be2b-8965036dce1a | available | vol1 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+


5. Check the multipath devices again. We can see the descriptor
3600601602ba03400efd6d0247c03e511 still exists
stack@ubuntu-server12:/proc/sys/dev/scsi$ sudo multipath -ll 3600601602ba03400efd6d0247c03e511
3600601602ba03400efd6d0247c03e511 dm-1 ,
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=-1 status=active
| `- #:#:#:#  -   #:#   active undef running
`-+- policy='round-robin 0' prio=-1 status=enabled
  `- #:#:#:#  -   #:#   active undef running
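
A manual cleanup of the stale map is possible with multipath's flush option
(a workaround sketch, not the fix itself; the WWID is the one shown above):

  # flush the leftover multipath device map after the volume is detached
  sudo multipath -f 3600601602ba03400efd6d0247c03e511
  # verify it is gone
  sudo multipath -ll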


Code Version:
stack@ubuntu-server12:/opt/stack/nova$ git status
On branch stable/kilo
Your branch is up-to-date with 'origin/stable/kilo'.

nothing to commit, working directory clean
stack@ubuntu-server12:/opt/stack/nova$ git log -1
commit 36fb00291d819b65f46d530eefbf07b883ca8a29
Merge: 0c60aca 8e9ecf7
Author: Jenkins jenk...@review.openstack.org
Date:   Fri May 15 16:16:00 2015 +

    Merge "Libvirt: Use tpool to invoke guestfs api" into stable/kilo

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459073

Title:
  Multipath device descriptor is not deleted after volume detached when
  FC and multipath are used

Status in OpenStack Compute (Nova):
  New

Bug description:
  When Fibre Channel and multipath are used, the multipath device
  descriptor is not deleted after a volume is detached from a VM. This
  is always reproducible.

[Yahoo-eng-team] [Bug 1459074] [NEW] Backup source is not selected when click trove backup restore button.

2015-05-26 Thread WonChon
Public bug reported:

In the Database Backups panel, when the user clicks the Restore Backup button,
the Launch Instance popup window appears.
However, "Source for Initial State" in the Advanced tab is not set to
"Restore from Backup".
So, unless the user selects "Restore from Backup" manually, the restore cannot
succeed, because backup_info is not submitted when the database instance is
created.
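
For comparison, a restore that does pass backup_info can be done from the CLI
(a sketch assuming the python-troveclient CLI; instance name, flavor and
backup id are placeholders):

  # create a new database instance restored from an existing backup
  trove create restored-db 1 --size 1 --backup <backup-id>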

** Affects: horizon
 Importance: Undecided
 Assignee: WonChon (arisu1000)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = WonChon (arisu1000)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459074

Title:
  Backup source is not selected when click trove backup restore
  button.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Database Backups panel, when the user clicks the Restore Backup
  button, the Launch Instance popup window appears.
  However, "Source for Initial State" in the Advanced tab is not set to
  "Restore from Backup".
  So, unless the user selects "Restore from Backup" manually, the restore
  cannot succeed, because backup_info is not submitted when the database
  instance is created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459051] [NEW] Glance v2 Tasks Creation should be non-blocking

2015-05-26 Thread Sabari Murugesan
Public bug reported:

The Glance v2 task creation API is blocking, in the sense that when a task
is created, the 201 response is not received until the task is complete.
Also, by the time the response is received the task is reported as pending,
while in effect it could already have succeeded or failed.

We need to make it non-blocking by either running the executor in a
separate eventlet thread or offloading it to a task worker.
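
A minimal sketch of the first option (illustration only, not the actual
Glance code; the function and variable names here are hypothetical):

  import eventlet

  def create_task(task, executor):
      # return the 201 to the caller immediately and run the executor
      # in a separate green thread; the executor updates the task status
      eventlet.spawn_n(executor.run, task)
      return task  # still 'pending' at this point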

** Affects: glance
 Importance: Undecided
 Assignee: Sabari Murugesan (smurugesan)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) = Sabari Murugesan (smurugesan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1459051

Title:
  Glance v2 Tasks Creation should be non-blocking

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  The Glance v2 task creation API is blocking, in the sense that when a
  task is created, the 201 response is not received until the task is
  complete. Also, by the time the response is received the task is
  reported as pending, while in effect it could already have succeeded or
  failed.

  We need to make it non-blocking by either running the executor in a
  separate eventlet thread or offloading it to a task worker.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1459051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456809] Re: L3-agent not recreating missing fg- device

2015-05-26 Thread Eugene Nikanorov
Agree with Itzik's analysis.
Closing as 'Invalid'

** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456809

Title:
  L3-agent not recreating missing fg- device

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When using DVR, the fg- device on a compute node is needed to access
  VMs on that node.  If for any reason the fg- device is deleted, users
  will not be able to access the VMs on the compute node.

  On a single node system where the L3-agent is running in 'dvr-snat'
  mode, a VM is booted up and assigned a floating IP.  The VM is
  pingable using the floating IP.  Now I go into the fip namespace and
  delete the fg- device using the command ovs-vsctl del-port br-ex
  fg-ccbd7bcb-75.  Now the VM can no longer be pinged.
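
  For reference, the device can currently only be restored by hand, e.g. (a
  sketch; the port name is the one from the example above, and the agent
  would still need to re-plumb it):

    # re-create the internal port that was deleted from br-ex
    sudo ovs-vsctl add-port br-ex fg-ccbd7bcb-75 -- set Interface fg-ccbd7bcb-75 type=internal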

  Then another VM is booted up and it is also assigned a Floating IP.
  The new VM is not pingable either.

  The L3-agent log shows it reported that it cannot find fg-ccbd7bcb-75
  when setting up the qrouter and fip namespaces for the new floating
  IP, but it did not go and re-create the fg- device.

  Granted, deleting the device is a deliberate act to make the cloud
  fail, but the L3-agent could have gone ahead and re-created the fg-
  device to make it more fault tolerant.

  The problem can be reproduced with the latest neutron code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439870] Re: Fixed IPs not being recorded in database

2015-05-26 Thread Eugene Nikanorov
This is not a neutron bug; please be more attentive.

** Project changed: neutron = nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439870

Title:
  Fixed IPs not being recorded in database

Status in OpenStack Compute (Nova):
  New

Bug description:
  When new VMs are spawned after deleting previous VMs, the new VMs
  obtain completely new IPs and the old ones are not recycled for reuse.
  I looked into the MySQL database to see where IPs may be stored and
  accessed by OpenStack to determine what the next in line should be,
  but didn't manage to find any IP information there. Has the location
  of this storage changed out of the fixed_ips table? Currently, this
  table is entirely empty:

  MariaDB [nova]> select * from fixed_ips;
  Empty set (0.00 sec)

  despite having many vms running on two different networks:

   mysql -e "select uuid, deleted, power_state, vm_state, display_name, host from nova.instances;"
  +--------------------------------------+---------+-------------+----------+--------------+--------------+
  | uuid                                 | deleted | power_state | vm_state | display_name | host         |
  +--------------------------------------+---------+-------------+----------+--------------+--------------+
  | 14600536-7ce1-47bf-8f01-1a184edb5c26 |       0 |           4 | error    | Ctest        | r001ds02.pcs |
  | abb38321-5b74-4f36-b413-a057897b8579 |       0 |           4 | stopped  | cent7        | r001ds02.pcs |
  | 31cbb003-42d0-468a-be4d-81f710e29aef |       0 |           1 | active   | centos7T2    | r001ds02.pcs |
  | 4494fd8d-8517-4f14-95e6-fe5a6a64b331 |       0 |           1 | active   | selin_test   | r001ds02.pcs |
  | 25505dc4-2ba9-480d-ba5a-32c2e91fc3c9 |       0 |           1 | active   | 2NIC         | r001ds02.pcs |
  | baff8cef-c925-4dfb-ae90-f5f167f32e83 |       0 |           4 | stopped  | kepairtest   | r001ds02.pcs |
  | 317e1fbf-664d-43a8-938a-063fd53b801d |       0 |           1 | active   | test         | r001ds02.pcs |
  | 3a8c1a2d-1a4b-4771-8e62-ab1982759ecd |       0 |           1 | active   | 3            | r001ds02.pcs |
  | c4b2175a-296c-400c-bd54-16df3b4ca91b |       0 |           1 | active   | 344          | r001ds02.pcs |
  | ac02369e-b426-424d-8762-71ca93eacd0c |       0 |           4 | stopped  | 333          | r001ds02.pcs |
  | 504d9412-e2a3-492a-8bc1-480ce6249f33 |       0 |           1 | active   | libvirt      | r001ds02.pcs |
  | cc9f6f06-2ba6-4ec2-94f7-3a795aa44cc4 |       0 |           1 | active   | arger        | r001ds02.pcs |
  | 0a247dbf-58b4-4244-87da-510184a92491 |       0 |           1 | active   | arger2       | r001ds02.pcs |
  | 4cb85bbb-7248-4d46-a9c2-fee312f67f96 |       0 |           1 | active   | gh           | r001ds02.pcs |
  | adf9de81-3986-4d73-a3f1-a29d289c2fe3 |       0 |           1 | active   | az           | r001ds02.pcs |
  | 8396eabf-d243-4424-8ec8-045c776e7719 |       0 |           1 | active   | sdf          | r001ds02.pcs |
  | 947905b5-7a2c-4afb-9156-74df8ed699c5 |      55 |           1 | deleted  | yh           | r001ds02.pcs |
  | f690d7ed-f8d5-45a1-b679-e79ea4d3366f |      56 |           1 | deleted  | tr           | r001ds02.pcs |
  | dd1aa5b1-c0ac-41f6-a6de-05be8963242f |      57 |           1 | deleted  | ig           | r001ds02.pcs |
  | 42688a7d-2ba2-4d5a-973f-e87f87c32326 |      58 |           1 | deleted  | td           | r001ds02.pcs |
  | 7c1014d8-237d-48f0-aa77-3aa09fff9101 |      59 |           1 | deleted  | td2          | r001ds02.pcs |
  +--------------------------------------+---------+-------------+----------+--------------+--------------+

  I am using Neutron networking with OVS.  It is my understanding that
  SQLAlchemy is set up to leave old information accessible in MySQL, but
  deleting the associated information manually doesn't seem to make a
  difference to the fixed_ips issue I am experiencing. Are there
  solutions for this?
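
  With Neutron networking the allocations live in Neutron's own database
  rather than in nova.fixed_ips, so a check along these lines (a sketch; the
  database name 'neutron' is an assumption about the deployment) should show
  the addresses:

    mysql -e "select port_id, ip_address, subnet_id from neutron.ipallocations;"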

  nova --version : 2.20.0 ( 2014.2.1-1.el7 running on centOS7, epel-juno
  release)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458890] Re: Add segment support to Neutron

2015-05-26 Thread Kyle Mestery
Per the new specs process [1], this is filed as a feature request. The
requestors (large ops deployers) have noted this as a feature they want
to have, but they don't have the manpower to develop it themselves.
Thus, it's an RFE for now.

[1] https://review.openstack.org/177342

** Changed in: neutron
   Status: Opinion = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458890

Title:
  Add segment support to Neutron

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  This is feedback from the Vancouver OpenStack Summit.

  During the large deployment team (Go Daddy, Yahoo!, NeCTAR, CERN,
  Rackspace, HP, BlueBox, among others) meeting, there was a discussion
  of network architectures that we use to deliver Openstack.  As we
  talked it became clear that there are a number of challenges around
  networking.

  In many cases, our data center networks are architected with a
  differentiation between layer 2 and layer 3.  Said another way, there
  are distinct network segments which are only available to a subset
  of compute hosts.  These topologies are typically necessary to manage
  network resource capacity (IP addresses, broadcast domain size, ARP
  tables, etc.)   Network topologies like these are not possible to
  describe with Neutron constructs today.

  The traditional solution to this is tunneling and overlay networks
  which makes all networks available everywhere in the data center.
  However, overlay networks represent a large increase in complexity
  that can be very difficult to troubleshoot.  For this reason, many
  large deployers are not using overlay networks at all (or only for
  specific use cases like private tenant networks.)

  Because Neutron does not have constructs that accurately describe our
  network architectures, we'd like to see the notion of a network
  segment in Neutron.  A segment could mean an L2 domain, an IP block
  boundary, or another kind of partition.  Operators could use this new
  construct to build accurate models of network topology within Neutron,
  making it much more usable.

  Example:  The typical use case is L2 segments that are constrained
  to a single rack (or some subset of compute hosts), but are still part
  of a larger L3 network.  In this case, the overall Neutron network
  would describe the L3 network, and the network segments would be used
  to describe the L2 segments.

  
  With the network segment construct (which is not intended to be exposed
  to end users), there is also a need for some scheduling logic around
  placement and addressing of instances on an appropriate network segment
  based on availability and capacity.  This also implies a means via API to
  report IP capacity of networks and segments, so we can filter out segments
  without capacity and the compute nodes that are tied to those segments.
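
  Purely as an illustration of the kind of data such a capacity API could
  expose (this is not an existing or proposed Neutron API; every field name
  here is hypothetical):

    {"network_id": "NET-UUID",
     "segments": [
       {"segment_id": "SEG-1", "hosts": ["rack1-host1", "rack1-host2"],
        "total_ips": 256, "used_ips": 240},
       {"segment_id": "SEG-2", "hosts": ["rack2-host1"],
        "total_ips": 256, "used_ips": 12}]}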

  Example:  The end user chooses the Neutron network for their
  instance, which is actually comprised of several lower level network
  segments within Neutron.  Scheduling must be done such that the
  network segment chosen for the instance is available to the compute
  node on which the instance is placed.  Additionally, the network
  segment that's chosen must have available IP capacity in order for the
  instance to be placed there.

  
  Also, the scheduling for resize, migrate, ... should only consider the 
compute nodes allowed in the network segment where the VM is placed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377161] Re: If volume-attach API is failed, Block Device Mapping record will remain

2015-05-26 Thread Mehdi Abaakouk
** Changed in: oslo.messaging
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377161

Title:
  If volume-attach API is failed, Block Device Mapping record will
  remain

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  In Progress
Status in Messaging API for OpenStack:
  Invalid
Status in Python client library for Cinder:
  Invalid

Bug description:
  I executed the volume-attach API (nova v2 API) while RabbitMQ was down.
  As a result, the volume-attach API failed and the volume's status is
  still 'available'.
  However, a block device mapping record remains in the nova DB.
  This is an inconsistent state.

  Also, the leftover block device mapping record may cause some problems.
  (I'm researching this now.)

  I used OpenStack juno-3.
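
  The leftover record can be spotted directly in the database (a sketch; the
  instance UUID is the one from the output below):

    select id, device_name, volume_id, deleted from nova.block_device_mapping
    where instance_uuid = '0b529526-4c8d-4650-8295-b7155a977ba7' and deleted = 0;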

  
--
  * Before executing volume-attach API:

  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks           |
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  | 0b529526-4c8d-4650-8295-b7155a977ba7 | testVM | ACTIVE | -          | Running     | private=10.0.0.104 |
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  $ cinder list
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  | ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  | e93478bf-ee37-430f-93df-b3cf26540212 | available | None         | 1    | None        | false    |             |
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  devstack@ubuntu-14-04-01-64-juno3-01:~$

  mysql> select * from block_device_mapping where instance_uuid = '0b529526-4c8d-4650-8295-b7155a977ba7';
  | created_at          | updated_at          | deleted_at | id  | device_name | delete_on_termination | snapshot_id | volume_id | volume_size | no_device | connection_info | instance_uuid                        | deleted | source_type | destination_type | guest_format | device_type | disk_bus | boot_index | image_id                             |
  | 2014-10-02 18:36:08 | 2014-10-02 18:36:10 | NULL       | 145 | /dev/vda    | 1                     | NULL        | NULL      | NULL        | NULL      | NULL            | 0b529526-4c8d-4650-8295-b7155a977ba7 | 0       | image       | local            | NULL         | disk        | NULL     | 0          | c1d264fd-c559-446e-9b94-934ba8249ae1 |
  1 row in set (0.00 sec)

  * After executing volume-attach API:
  $ nova list --all-t
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks           |
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  | 0b529526-4c8d-4650-8295-b7155a977ba7 | testVM | ACTIVE | -          | Running     | private=10.0.0.104 |
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  $ cinder list
  +--------------------------------------+-----------+--------------+------+
  | ID                                   | Status    | Display Name | Size |
[Yahoo-eng-team] [Bug 1458786] [NEW] Update port security group, relevant ipset member can't be updated

2015-05-26 Thread shihanzhang
Public bug reported:

reproduce step:

1.  VM1 in security group A
2.  VM2 in security group B
3.  security group B can access security group A
4.  update VM1 to security group C

I found that VM1's IP address was still in the ipset members belonging to
security group A, even though VM1 had already been moved to security group C.
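
The stale entry can be confirmed on the compute node hosting VM1 (a sketch;
the exact set name is an assumption -- Neutron names its sets after the
security group id, e.g. NIPv4<sg-id-prefix>):

  # list all ipset members; VM1's fixed IP still appears under the set
  # that corresponds to security group A
  sudo ipset list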

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458786

Title:
  Update port security group, relevant ipset member can't be updated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  reproduce step:

  1.  VM1 in security group A
  2.  VM2 in security group B
  3.  security group B can access security group A
  4.  update VM1 to security group C

  I found that VM1's IP address was still in the ipset members belonging
  to security group A, even though VM1 had already been moved to security
  group C.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458769] [NEW] horizon can't update subnet ip pool

2015-05-26 Thread reachlin
Public bug reported:

Updating the IP allocation pool of a subnet reports success, but a refresh
shows that the data has not changed.

steps to recreate:
1. Admin -> Networks -> Subnets
2. Edit Subnet -> Subnet Details -> Allocation Pools
3. Save the changes
4. Check the subnet detail after the success message shows (see the CLI cross-check below)
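
The same update can be cross-checked from the CLI (a sketch with placeholder
values):

  neutron subnet-update <subnet-id> --allocation-pool start=10.0.0.10,end=10.0.0.100
  neutron subnet-show <subnet-id>   # allocation_pools should show the new range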

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458769

Title:
  horizon can't update subnet ip pool

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Updating the IP allocation pool of a subnet reports success, but a
  refresh shows that the data has not changed.

  steps to recreate:
  1. Admin -> Networks -> Subnets
  2. Edit Subnet -> Subnet Details -> Allocation Pools
  3. Save the changes
  4. Check the subnet detail after the success message shows

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458770] [NEW] OvsdbMonitor module should respawn monitor only on specific failures

2015-05-26 Thread Sonu
Public bug reported:

As of today, the OvsdbMonitor used by the neutron-openvswitch-agent re-spawns
the monitor whenever anything appears on the stderr of the child monitor
process, irrespective of its severity or relevance.
It is not ideal to restart the child monitor process when the errors are not
fatal, for example warnings from a plugin driver that echoes warnings to
stderr.

If such errors appear periodically, there are two side effects:
a) Too-frequent monitor re-spawns could result in db.sock contention.
b) Frequent restarts could result in the neutron agent missing interface
additions.

Ideally, only fatal errors that would affect the monitoring or change
detection of bridges, ports and interfaces should result in a restart of the
monitor, unlike the behaviour today.
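
A minimal sketch of the proposed behaviour (illustration only; the real
OvsdbMonitor code differs, and the severity markers shown are assumptions):

  FATAL_MARKERS = ('fatal', 'ERROR')  # hypothetical filter

  def should_respawn(stderr_line):
      # only restart the child monitor for genuinely fatal output,
      # not for warnings echoed to stderr by plugin drivers
      return any(marker in stderr_line for marker in FATAL_MARKERS)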

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458770

Title:
  OvsdbMonitor module should respawn monitor only on specific failures

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As of today, the OvsdbMonitor used by the neutron-openvswitch-agent
  re-spawns the monitor whenever anything appears on the stderr of the child
  monitor process, irrespective of its severity or relevance.
  It is not ideal to restart the child monitor process when the errors are
  not fatal, for example warnings from a plugin driver that echoes warnings
  to stderr.

  If such errors appear periodically, there are two side effects:
  a) Too-frequent monitor re-spawns could result in db.sock contention.
  b) Frequent restarts could result in the neutron agent missing interface
  additions.

  Ideally, only fatal errors that would affect the monitoring or change
  detection of bridges, ports and interfaces should result in a restart of
  the monitor, unlike the behaviour today.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458800] [NEW] No hostname validation

2015-05-26 Thread Marcos Lobo
Public bug reported:

Horizon Juno does not validate hostnames in the instance creation
form. For example, you can create a new instance with the name: #3r

That name will cause problems when Nova tries to handle it, but Horizon
allows launching a new instance with an invalid hostname. A valid hostname
is specified in RFC 1123 (http://tools.ietf.org/html/rfc1123).

I propose adding hostname validation to the create/edit instance form
following RFC 1123.
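
A minimal sketch of an RFC 1123 hostname check the form could apply
(illustration only, not the actual Horizon patch):

  import re

  # labels of 1-63 alphanumeric/hyphen characters, no leading/trailing
  # hyphen, total length up to 255 characters
  HOSTNAME_RE = re.compile(
      r'^(?!-)[A-Za-z0-9-]{1,63}(?<!-)'
      r'(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))*$')

  def is_valid_hostname(name):
      return len(name) <= 255 and bool(HOSTNAME_RE.match(name))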

** Affects: horizon
 Importance: Undecided
 Assignee: Ritesh (rsritesh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458800

Title:
  No hostname validation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon Juno does not validate hostnames in the instance creation
  form. For example, you can create a new instance with the name: #3r

  That name will cause problems when Nova tries to handle it, but Horizon
  allows launching a new instance with an invalid hostname. A valid
  hostname is specified in RFC 1123 (http://tools.ietf.org/html/rfc1123).

  I propose adding hostname validation to the create/edit instance form
  following RFC 1123.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458477] Re: Flavors are not sorted when launching instance

2015-05-26 Thread Ritesh
Actually, this is not a bug; currently flavors are sorted by RAM.

IMHO it is correct to have this. If everyone agrees, we can change
from RAM-based to name-based sorting.
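
For what it's worth, Horizon already exposes a setting for this, so a
deployer can switch to name-based sorting without a code change (a sketch
for local_settings.py; availability of the option depends on the release):

  # sort the flavor drop-down in the Launch Instance form by name
  CREATE_INSTANCE_FLAVOR_SORT = {
      'key': 'name',
      'reverse': False,
  }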

** Changed in: horizon
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458477

Title:
  Flavors are not sorted when launching instance

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When you click Instances -> Launch Instance, the flavor values in the
  drop-down menu are not sorted. I have 16 flavors and it is already hard
  to find the correct one.
  It would be nice if this form field could be sorted alphabetically.

  I found this in OS Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458477/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458809] [NEW] Unable to delete instances created using stale networks

2015-05-26 Thread Sudipta Biswas
Public bug reported:

I am on Kilo.

I was using VxLAN-based networks.
As the lab requirements changed, I had to move over to flat networking.
This involved editing the ml2_conf.ini file and making the necessary changes
for 'flat' networking to work.
However, I no longer had VxLAN networking enabled - even though the networks
created earlier (using VxLAN) were still lying around. (This wasn't
intentional.)

Without actually intending to, I then deployed an instance on one of the
VxLAN-based networks.
This results in a build failure on the compute node with the following
exception:

Unable to clear device ID for port 'None'
 TRACE nova.network.neutronv2.api Traceback (most recent call last):
 TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line 366, in 
_unbind_ports
 TRACE nova.network.neutronv2.api port_client.update_port(port_id, 
port_req_body)
 TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 102, in 
with_params
 TRACE nova.network.neutronv2.api ret = self.function(instance, *args, 
**kwargs)
 TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 549, in 
update_port
 TRACE nova.network.neutronv2.api return self.put(self.port_path % (port), 
body=body)
 TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 302, in 
put
TRACE nova.network.neutronv2.api headers=headers, params=params)
 TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 270, in 
retry_request
 TRACE nova.network.neutronv2.api headers=headers, params=params)
 TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 211, in 
do_request
 TRACE nova.network.neutronv2.api self._handle_fault_response(status_code, 
replybody)
 TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 185, in 
_handle_fault_response
 TRACE nova.network.neutronv2.api exception_handler_v20(status_code, 
des_error_body)
 TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 83, in 
exception_handler_v20
TRACE nova.network.neutronv2.api message=message)
 nova.network.neutronv2.api NeutronClientException: 404 Not Found

The bind failed because of the following error:
Network a813e9e3-4e87-4de6-8f48-84e4a4cb774a is of type vxlan but agent or 
mechanism driver only support ['local', 'flat', 'vlan']. 
check_segment_for_agent 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mech_agent.py:193
which is clear and expected.

After this, I wanted to clean up the instance, but it just won't get deleted,
even though the delete request comes back with "Request to delete server has
been accepted".

Upon debugging with pdb, I could see that an error is being thrown in
nova/api/openstack/wsgi.py around line 1061:

'Controller' object has no attribute 'versioned_methods'

I think this is a different bug from the ones that have been reported
earlier; bug 1329559 is one for reference.
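
As a workaround sketch (standard nova CLI calls, not a fix for the underlying
bug; the UUID is a placeholder):

  # force the instance out of its wedged state, then delete it
  nova reset-state --active <instance-uuid>
  nova force-delete <instance-uuid>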

** Affects: nova
 Importance: Undecided
 Assignee: Sudipta Biswas (sbiswas7)
 Status: New


** Tags: network

** Changed in: nova
 Assignee: (unassigned) = Sudipta Biswas (sbiswas7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458809

Title:
  Unable to delete instances created using stale networks

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am on Kilo.

  I was using VxLAN-based networks.
  As the lab requirements changed, I had to move over to flat networking.
  This involved editing the ml2_conf.ini file and making the necessary
  changes for 'flat' networking to work.
  However, I no longer had VxLAN networking enabled - even though the
  networks created earlier (using VxLAN) were still lying around. (This
  wasn't intentional.)

  Without actually intending to, I then deployed an instance on one of the
  VxLAN-based networks.
  This results in a build failure on the compute node with the following
  exception:

  Unable to clear device ID for port 'None'
   TRACE nova.network.neutronv2.api Traceback (most recent call last):
   TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line 366, in 
_unbind_ports
   TRACE nova.network.neutronv2.api port_client.update_port(port_id, 
port_req_body)
   TRACE nova.network.neutronv2.api   File 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 102, in 
with_params
   TRACE nova.network.neutronv2.api ret = self.function(instance, *args, 
**kwargs)
   TRACE nova.network.neutronv2.api   File 

[Yahoo-eng-team] [Bug 1458803] [NEW] Allocation pool not updated upon subnet edit from Horizon

2015-05-26 Thread Sudipta Biswas
Public bug reported:

Using Kilo.
When I edit the allocation pool for a given subnet by editing the subnet,
the Save operation succeeds but the allocation pool values aren't updated
and the older values are still displayed (and used).
Tried this from the neutron CLI, and neutron subnet-update --allocation-pool
start=,end= works fine, hence I am assuming this is a Horizon bug.
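
The CLI form that does work, with placeholder values filled in purely for
illustration:

  neutron subnet-update <subnet-id> --allocation-pool start=192.168.1.50,end=192.168.1.200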

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: neutron

** Tags added: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458803

Title:
  Allocation pool not updated upon subnet edit from Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using Kilo.
  When I edit the allocation pool for a given subnet by editing the subnet,
  the Save operation succeeds but the allocation pool values aren't updated
  and the older values are still displayed (and used).
  Tried this from the neutron CLI, and neutron subnet-update
  --allocation-pool start=,end= works fine, hence I am assuming this is a
  Horizon bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428424] Re: Remove use of contextlib.nested

2015-05-26 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Fix Released = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1428424

Title:
  Remove use of contextlib.nested

Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The contextlib.nested call has been deprecated
  in Python 2.7. This causes DeprecationWarning
  messages in the unit tests.
  
  There are also known issues with contextlib.nested
  that were addressed by the native support for
  multiple context managers in a single with statement.
  For instance, if the first object is created but the
  second one throws an exception, the first object's
  __exit__ is never called. For more information see
  https://docs.python.org/2/library/contextlib.html#contextlib.nested
  contextlib.nested is also not available in Python 3.
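
  For illustration, the usual mechanical rewrite looks like this (a sketch;
  the patch targets are arbitrary examples):

    import mock  # unittest.mock on Python 3

    # before (deprecated):
    #   with contextlib.nested(mock.patch('os.getcwd'),
    #                          mock.patch('os.listdir')) as (cwd, ls):
    #       ...
    # after (native multi-target with statement, Python 2.7 and 3.x):
    with mock.patch('os.getcwd') as cwd, mock.patch('os.listdir') as ls:
        cwd.return_value = '/tmp'
        ls.return_value = []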

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1428424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458838] [NEW] [IPV6][LBaaS] Adding default gw to vip port and clearing router arp cache failing for IPv6

2015-05-26 Thread venkata anil
Public bug reported:

LBaaS's addition of a default gateway to the VIP port and clearing of the
router ARP cache fail for IPv6.
The code at
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/drivers/haproxy/namespace_driver.py#L305
is IPv4-specific and needs to be enhanced to support IPv6.

Error log when trying to create a listener for an IPv6 subnet:


2015-05-26 10:57:13.235 DEBUG neutron.agent.linux.utils [req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 11babcdeb88542c38da7d02b34df3093] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'route', 'add', 'default', 'gw', '4001::1'] from (pid=19994) execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
2015-05-26 10:57:13.303 ERROR neutron.agent.linux.utils [req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 11babcdeb88542c38da7d02b34df3093]
Command: ['ip', 'netns', 'exec', u'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'route', 'add', 'default', 'gw', u'4001::1']
Exit code: 6
Stdin:
Stdout:
Stderr: 4001::1: Unknown host

2015-05-26 10:57:13.303 DEBUG neutron.agent.linux.utils [req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 11babcdeb88542c38da7d02b34df3093] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'arping', '-U', '-I', 'tap36377f20-9a', '-c', '3', '4001::f816:3eff:feed:1e66'] from (pid=19994) execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
2015-05-26 10:57:13.382 ERROR neutron.agent.linux.utils [req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 11babcdeb88542c38da7d02b34df3093]
Command: ['ip', 'netns', 'exec', u'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'arping', '-U', '-I', u'tap36377f20-9a', '-c', 3, u'4001::f816:3eff:feed:1e66']
Exit code: 2
Stdin:
Stdout:
Stderr: arping: unknown host 4001::f816:3eff:feed:1e66
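
For reference, IPv6-capable equivalents of the two failing calls would look
roughly like this (a sketch of what the namespace driver could run instead;
whether arping is skipped or replaced for IPv6 is left open here):

  ip netns exec qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab ip -6 route replace default via 4001::1
  # gratuitous ARP has no IPv6 equivalent; a neighbor advertisement would be
  # needed instead of arping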

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: ipv6 lb lbaas

** Changed in: neutron
 Assignee: (unassigned) = venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458838

Title:
  [IPV6][LBaaS] Adding default gw to vip port and clearing router arp
  cache failing for IPv6

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  LBaaS's addition of a default gateway to the VIP port and clearing of the
  router ARP cache fail for IPv6.
  The code at
  https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/drivers/haproxy/namespace_driver.py#L305
  is IPv4-specific and needs to be enhanced to support IPv6.

  Error log when trying to create a listener for an IPv6 subnet:

  
  2015-05-26 10:57:13.235 DEBUG neutron.agent.linux.utils [req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 11babcdeb88542c38da7d02b34df3093] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'route', 'add', 'default', 'gw', '4001::1'] from (pid=19994) execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
  2015-05-26 10:57:13.303 ERROR neutron.agent.linux.utils [req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 11babcdeb88542c38da7d02b34df3093]
  Command: ['ip', 'netns', 'exec', u'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'route', 'add', 'default', 'gw', u'4001::1']
  Exit code: 6
  Stdin:
  Stdout:
  Stderr: 4001::1: Unknown host

  2015-05-26 10:57:13.303 DEBUG neutron.agent.linux.utils [req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 11babcdeb88542c38da7d02b34df3093] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'arping', '-U', '-I', 'tap36377f20-9a', '-c', '3', '4001::f816:3eff:feed:1e66'] from (pid=19994) execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
  2015-05-26 10:57:13.382 ERROR neutron.agent.linux.utils [req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 11babcdeb88542c38da7d02b34df3093]
  Command: ['ip', 'netns', 'exec', u'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'arping', '-U', '-I', u'tap36377f20-9a', '-c', 3, u'4001::f816:3eff:feed:1e66']
  Exit code: 2
  Stdin:
  Stdout:
  Stderr: arping: unknown host 4001::f816:3eff:feed:1e66

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458861] [NEW] Unable to retrieve instances after changing to multi-domain setup

2015-05-26 Thread Marcel Jordan
Public bug reported:

After I changed Keystone to the multi-domain driver, I get the following
error message on the Horizon dashboard when I want to display instances:
Error: Unauthorized: Unable to retrieve instances

Name  : openstack-nova-api
Arch: noarch
Version   : 2014.2.2

/var/log/nova/nova.log
2015-05-26 14:09:44.512 2175 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Non-default domain is not supported (Disable debug mode to suppress these details.)", "code": 401, "title": "Unauthorized"}}
2015-05-26 14:09:44.512 2175 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-05-26 14:09:44.513 2175 INFO nova.osapi_compute.wsgi.server [-] 10.0.0.10 "GET /v2/1d524a0433474fa48eb376d913a80fc1/servers/detail?limit=21&project_id=1d524a0433474fa48eb376d913a80fc1 HTTP/1.1" status: 401 len: 258 time: 0.4322391
2015-05-26 14:09:44.518 2175 WARNING keystonemiddleware.auth_token [-] Unable to find authentication token in headers
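
The 401 suggests the auth_token middleware is validating the token against
the v2 identity API, which rejects non-default domains. One thing to check
(an assumption about this deployment, not a confirmed fix; host name and
credentials are placeholders) is the [keystone_authtoken] section in
nova.conf, e.g.:

  [keystone_authtoken]
  identity_uri = http://controller:35357
  auth_version = v3.0
  admin_user = nova
  admin_password = <password>
  admin_tenant_name = service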

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458861

Title:
  Unable to retrieve instances after changing to multi-domain setup

Status in OpenStack Compute (Nova):
  New

Bug description:
  After I changed Keystone to the multi-domain driver, I get the following
  error message on the Horizon dashboard when I want to display instances:
  Error: Unauthorized: Unable to retrieve instances

  Name  : openstack-nova-api
  Arch: noarch
  Version   : 2014.2.2

  /var/log/nova/nova.log
  2015-05-26 14:09:44.512 2175 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Non-default domain is not supported (Disable debug mode to suppress these details.)", "code": 401, "title": "Unauthorized"}}
  2015-05-26 14:09:44.512 2175 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
  2015-05-26 14:09:44.513 2175 INFO nova.osapi_compute.wsgi.server [-] 10.0.0.10 "GET /v2/1d524a0433474fa48eb376d913a80fc1/servers/detail?limit=21&project_id=1d524a0433474fa48eb376d913a80fc1 HTTP/1.1" status: 401 len: 258 time: 0.4322391
  2015-05-26 14:09:44.518 2175 WARNING keystonemiddleware.auth_token [-] Unable to find authentication token in headers

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1458861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-26 Thread Samuel de Medeiros Queiroz
** Also affects: swift
   Importance: Undecided
   Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Also affects: trove
   Importance: Undecided
   Status: New

** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Object Storage (Swift):
  New
Status in Openstack Database (Trove):
  New

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.
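
  For projects making the switch, the library usage is close to a drop-in
  for the incubator enforcer (a minimal sketch; the rule name and target
  handling are placeholders):

    from oslo_config import cfg
    from oslo_policy import policy

    ENFORCER = policy.Enforcer(cfg.CONF)

    def check(context, rule, target):
        # credentials are derived from the request context
        return ENFORCER.enforce(rule, target, context.to_dict())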

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-26 Thread Ruby Loo
** No longer affects: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in Magnum - Containers for OpenStack:
  New
Status in Manila:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  New
Status in OpenStack Data Processing (Sahara):
  New
Status in Openstack Database (Trove):
  New

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458958] [NEW] Exceptions from Cinder detach volume API not handled

2015-05-26 Thread Anthony Lee
Public bug reported:

There is currently a Cinder spec in progress that proposes the removal
of file locks that are present during volume attach/detach and in a few
other locations. Nova does not appear to be handling exceptions thrown
during the volume detach process in a way that notifies the user why the
detach failed.

Example of what happens when an exception is thrown during a detach by
Cinder's API:

http://paste.openstack.org/show/KBpPWxfVMmQ5GmLeAFpG/

Related Cinder WIP spec giving an overview of why Cinder API might throw
exceptions now:

https://review.openstack.org/#/c/149894/

Related Cinder WIP patch showing potential changes to Cinder API:

https://review.openstack.org/#/c/153748/

When a volume is in an '-ing' state (attaching, detaching, etc.), Cinder API
calls that interact with that volume will return an exception notifying the
caller that the volume is busy. There may be other calls to the Cinder API
(that deal with volumes) for which Nova does not handle exceptions; those
will need to be handled, too.

In order to reproduce the above exception:

Add 'raise exception.VolumeIsBusy(message="sample")' to the top of the
begin_detaching function in cinder/api.py.
Restart c-api.
After attaching a volume in OpenStack, attempt to detach it. The above
exception will occur in n-api.
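
A sketch of the kind of handling Nova could add around the detach path
(illustration only; the exception mapping is an assumption, not the proposed
patch):

  from cinderclient import exceptions as cinder_exception
  from nova import exception

  def begin_detaching(context, volume_id, client):
      try:
          client.volumes.begin_detaching(volume_id)
      except cinder_exception.ClientException as ex:
          # surface the Cinder failure (e.g. "volume is busy") to the caller
          raise exception.InvalidVolume(reason=str(ex))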

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458958

Title:
  Exceptions from Cinder detach volume API not handled

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is currently a Cinder spec in progress that proposes the removal
  of file locks that are present during volume attach/detach and in a few
  other locations. Nova does not appear to be handling exceptions thrown
  during the volume detach process in a way that notifies the user why
  the detach failed.

  Example of what happens when an exception is thrown during a detach by
  Cinder's API:

  http://paste.openstack.org/show/KBpPWxfVMmQ5GmLeAFpG/

  Related Cinder WIP spec giving an overview of why Cinder API might
  throw exceptions now:

  https://review.openstack.org/#/c/149894/

  Related Cinder WIP patch showing potential changes to Cinder API:

  https://review.openstack.org/#/c/153748/

  When a volume is in an '-ing' state (attaching, detaching, etc.), Cinder
  API calls that interact with that volume will return an exception
  notifying the caller that the volume is busy. There may be other calls
  to the Cinder API (that deal with volumes) for which Nova does not
  handle exceptions; those will need to be handled, too.

  In order to reproduce the above exception:

  Add 'raise exception.VolumeIsBusy(message="sample")' to the top of the
  begin_detaching function in cinder/api.py.
  Restart c-api.
  After attaching a volume in OpenStack, attempt to detach it. The above
  exception will occur in n-api.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1458958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-26 Thread Samuel Merritt
** No longer affects: swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in Magnum - Containers for OpenStack:
  New
Status in Manila:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  New
Status in OpenStack Data Processing (Sahara):
  New
Status in Openstack Database (Trove):
  New

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-26 Thread Samuel de Medeiros Queiroz
** Also affects: sahara
   Importance: Undecided
   Status: New

** Also affects: barbican
   Importance: Undecided
   Status: New

** Also affects: designate
   Importance: Undecided
   Status: New

** Also affects: magnum
   Importance: Undecided
   Status: New

** Also affects: manila
   Importance: Undecided
   Status: New

** Also affects: murano
   Importance: Undecided
   Status: New

** Also affects: congress
   Importance: Undecided
   Status: New

** Also affects: rally
   Importance: Undecided
   Status: New

** Also affects: magnetodb
   Importance: Undecided
   Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

** Also affects: mistral
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in Magnum - Containers for OpenStack:
  New
Status in Manila:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  New
Status in OpenStack Data Processing (Sahara):
  New
Status in OpenStack Object Storage (Swift):
  New
Status in Openstack Database (Trove):
  New

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458968] [NEW] stable/juno unit tests blocked: ContextualVersionConflict: (oslo.i18n 1.3.1 (/home/jenkins/workspace/periodic-glance-python27-juno/.tox/py27/lib/python2.7/site-pac

2015-05-26 Thread Adam Gandelman
Public bug reported:

stable/juno unit tests are failing on (multiple) dependency conflicts.
Reproducible outside the gate simply by running the py27 or py26 tox env
locally:

Tests in glance.tests.unit.test_opts fail with:

ContextualVersionConflict: (oslo.i18n 1.3.1 (/home/jenkins/workspace
/periodic-glance-python27-juno/.tox/py27/lib/python2.7/site-packages),
Requirement.parse('oslo.i18n>=1.5.0'), set(['oslo.utils', 'pycadf']))


This isn't affecting stable/juno tempest runs of this stuff since devstack sets 
up libraries directly from tip of the stable branches, where requirements have 
been updated to avoid this.  Those fixes haven't been pushed out via releases 
to pypi, which is what the unit tests rely on.

There are two paths of conflict

glance (stable/juno) (keystonemiddleware>=1.0.0,<1.4.0)
  - keystonemiddleware (1.3.1) (pycadf>=0.6.0)
  - pycadf (0.9.0)
  - CONFLICT oslo.config>=1.9.3  # Apache-2.0
  - CONFLICT oslo.i18n>=1.5.0  # Apache-2.0

As per GR, we should be getting pycadf>=0.6.0,!=0.6.2,<0.7.0, but
keystonemiddleware's uncapped dep is pulling in the newer version.
https://review.openstack.org/#/c/173123/ resolves the issue by adding the
proper stable/juno caps to keystonemiddleware stable/juno, but it looks like
those changes need to be released as keystonemiddleware 1.3.2.
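
A quick way to surface the same resolution failure locally, outside of tox,
is to ask pkg_resources to resolve the installed distributions (an
illustrative sketch only; the package set on your machine may differ):

    import pkg_resources

    # show what is actually installed
    for name in ('oslo.i18n', 'oslo.utils', 'pycadf', 'keystonemiddleware'):
        try:
            print(pkg_resources.get_distribution(name))
        except pkg_resources.DistributionNotFound:
            print('%s is not installed' % name)

    # raises (Contextual)VersionConflict when the installed oslo.i18n does
    # not satisfy what oslo.utils / pycadf declare in their requirements
    pkg_resources.require('pycadf')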

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: keystonemiddleware
 Importance: Undecided
 Status: New

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

** Summary changed:

- stable/juno unit tests wedged: ContextualVersionConflict: (oslo.i18n 1.3.1 
(/home/jenkins/workspace/periodic-glance-python27-juno/.tox/py27/lib/python2.7/site-packages),
 Requirement.parse('oslo.i18n>=1.5.0'), set(['oslo.utils', 'pycadf']))
+ stable/juno unit tests blocked: ContextualVersionConflict: (oslo.i18n 1.3.1 
(/home/jenkins/workspace/periodic-glance-python27-juno/.tox/py27/lib/python2.7/site-packages),
 Requirement.parse('oslo.i18n>=1.5.0'), set(['oslo.utils', 'pycadf']))

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1458968

Title:
  stable/juno unit tests blocked: ContextualVersionConflict: (oslo.i18n
  1.3.1 (/home/jenkins/workspace/periodic-glance-
  python27-juno/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('oslo.i18n>=1.5.0'), set(['oslo.utils', 'pycadf']))

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Identity (Keystone) Middleware:
  New

Bug description:
  stable/juno unit tests are failing on (multiple) dependency conflicts.
  Reproducible outside the gate simply by running the py27 or py26 tox env
  locally:

  Tests in glance.tests.unit.test_opts fail with:

  ContextualVersionConflict: (oslo.i18n 1.3.1 (/home/jenkins/workspace
  /periodic-glance-python27-juno/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('oslo.i18n>=1.5.0'), set(['oslo.utils', 'pycadf']))

  
  This isn't affecting stable/juno tempest runs of this stuff since devstack 
sets up libraries directly from tip of the stable branches, where requirements 
have been updated to avoid this.  Those fixes haven't been pushed out via 
releases to pypi, which is what the unit tests rely on.

  There are two paths of conflict

  glance (stable/juno) (keystonemiddleware>=1.0.0,<1.4.0)
  - keystonemiddleware (1.3.1) (pycadf>=0.6.0)
  - pycadf (0.9.0)
  - CONFLICT oslo.config>=1.9.3  # Apache-2.0
  - CONFLICT oslo.i18n>=1.5.0  # Apache-2.0

  As per GR, we should be getting pycadf>=0.6.0,!=0.6.2,<0.7.0, but
keystonemiddleware's uncapped dep is pulling in the newer version.
  https://review.openstack.org/#/c/173123/ resolves the issue by adding the
proper stable/juno caps to keystonemiddleware stable/juno, but it looks like
those changes need to be released as keystonemiddleware 1.3.2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1458968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458928] [NEW] jshint failing on angular js in stable/kilo

2015-05-26 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/56/183656/1/check/gate-horizon-
jshint/cd75430/console.html.gz#_2015-05-15_19_27_08_073

Looks like this started after 5/14, since there was a passing job before
that:

http://logs.openstack.org/21/183321/1/check/gate-horizon-
jshint/90ca4dd/console.html.gz#_2015-05-14_22_50_02_203

The only difference I see in external libraries used is tox went from
1.9.2 (passing) to tox 2.0.1 (failing).  So I'm thinking there is
something with how the environment is defined for the jshint runs
because it appears that .jshintrc isn't getting used, see the workaround
fix here:

https://review.openstack.org/#/c/185172/

From the tox 2.0 changelog:

https://testrun.org/tox/latest/changelog.html

(new) introduce environment variable isolation: tox now only passes the
PATH and PIP_INDEX_URL variable from the tox invocation environment to
the test environment and on Windows also SYSTEMROOT, PATHEXT, TEMP and
TMP whereas on unix additionally TMPDIR is passed. If you need to pass
through further environment variables you can use the new passenv
setting, a space-separated list of environment variable names. Each name
can make use of fnmatch-style glob patterns. All environment variables
which exist in the tox-invocation environment will be copied to the test
environment.

** Affects: horizon
 Importance: Critical
 Status: Confirmed

** Changed in: horizon
   Status: New = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458928

Title:
  jshint failing on angular js in stable/kilo

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  http://logs.openstack.org/56/183656/1/check/gate-horizon-
  jshint/cd75430/console.html.gz#_2015-05-15_19_27_08_073

  Looks like this started after 5/14, since there was a passing job
  before that:

  http://logs.openstack.org/21/183321/1/check/gate-horizon-
  jshint/90ca4dd/console.html.gz#_2015-05-14_22_50_02_203

  The only difference I see in external libraries used is tox went from
  1.9.2 (passing) to tox 2.0.1 (failing).  So I'm thinking there is
  something with how the environment is defined for the jshint runs
  because it appears that .jshintrc isn't getting used, see the
  workaround fix here:

  https://review.openstack.org/#/c/185172/

  From the tox 2.0 changelog:

  https://testrun.org/tox/latest/changelog.html

  (new) introduce environment variable isolation: tox now only passes
  the PATH and PIP_INDEX_URL variable from the tox invocation
  environment to the test environment and on Windows also SYSTEMROOT,
  PATHEXT, TEMP and TMP whereas on unix additionally TMPDIR is passed.
  If you need to pass through further environment variables you can use
  the new passenv setting, a space-separated list of environment
  variable names. Each name can make use of fnmatch-style glob patterns.
  All environment variables which exist in the tox-invocation
  environment will be copied to the test environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458994] [NEW] When logged in as a pure domain admin, cannot list users in a group

2015-05-26 Thread Michael Hagedorn
Public bug reported:

When using domain-scoped tokens, and trying to add users to a group, keystone
throws the error {u'error': {u'code': 403,
  u'message': u'You are not authorized to perform the requested action:
identity:list_users_in_group (Disable debug mode to suppress these details.)',
  u'title': u'Forbidden'}}.

To reproduce this bug you may use the following code:


import requests
import json

# OS_AUTH_URL, OS_USERNAME and OS_PASSWORD are assumed to be defined
# in the reporter's environment


def get_unscoped_token(username, password, domain):
    headers = {'Content-Type': 'application/json'}
    payload = {'auth': {'identity': {'password': {'user': {
        'domain': {'name': domain}, 'password': password,
        'name': username}}, 'methods': ['password']}}}
    r = requests.post(OS_AUTH_URL, data=json.dumps(payload), headers=headers)
    return r.headers['X-Subject-Token']


def get_token_scoped_to_domain(unscoped_token, domain):
    headers = {'Content-Type': 'application/json'}
    payload = {'auth': {'scope': {'domain': {'name': domain}},
                        'identity': {'token': {'id': unscoped_token},
                                     'methods': ['token']}}}
    r = requests.post(OS_AUTH_URL, data=json.dumps(payload), headers=headers)
    return r.headers['X-Subject-Token']


def get_token_scoped_to_project(unscoped_token, project):
    headers = {'Content-Type': 'application/json'}
    payload = {'auth': {'scope': {'project': {'name': project}},
                        'identity': {'token': {'id': unscoped_token},
                                     'methods': ['token']}}}
    r = requests.post(OS_AUTH_URL, data=json.dumps(payload), headers=headers)
    return r.headers['X-Subject-Token']


def list_domains(token):
    headers = {'Content-Type': 'application/json',
               'Accept': 'application/json',
               'X-Auth-Token': token}
    r = requests.get('http://192.168.27.100:35357/v3/domains',
                     headers=headers)
    return r.json()['domains']


def list_groups_for_domain(domain_id, token):
    headers = {'Content-Type': 'application/json',
               'X-Auth-Token': token}
    r = requests.get('http://192.168.27.100:5000/v3/groups?domain_id=%s' %
                     domain_id, headers=headers)
    return r.json()['groups']


def get_domain_named(domain_name, token):
    domains = list_domains(token)
    domain = next(x for x in domains if x.get('name') == domain_name)
    return domain


def get_group_named_in_domain(group_name, domain_id, token):
    groups = list_groups_for_domain(domain_id, token)
    group = next(x for x in groups if x.get('name') == group_name)
    return group


def get_users_in_group_in_domain(group_id, domain_id, token):
    headers = {'Content-Type': 'application/json',
               'Accept': 'application/json',
               'X-Auth-Token': token}
    r = requests.get('http://192.168.27.100:35357/v3/groups/%s/users'
                     '?domain_id=%s' % (group_id, domain_id),
                     headers=headers)
    return r.json()


unscoped_token = get_unscoped_token(OS_USERNAME, OS_PASSWORD, 'default')
domain_token = get_token_scoped_to_domain(unscoped_token, 'default')
nintendo_domain = get_domain_named('nintendo', domain_token)

# nintendo domain operations
unscoped_token = get_unscoped_token('mario', 'pass', 'nintendo')
domain_token = get_token_scoped_to_domain(unscoped_token, 'nintendo')

list_groups_for_domain(nintendo_domain.get('id'), domain_token)

list_groups_for_domain(nintendo_domain.get('id'), domain_token)

mygroup = get_group_named_in_domain('mygroup', nintendo_domain.get('id'),
                                    domain_token)

# this last call fails with the 403 above
get_users_in_group_in_domain(mygroup.get('id'),
                             nintendo_domain.get('id'), domain_token)

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1458994

Title:
  When logged in as a pure domain admin, cannot list users in a group

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When using domain-scoped tokens, and trying to add users to a group,
keystone throws the error {u'error': {u'code': 403,
u'message': u'You are not authorized to perform the requested action: 
identity:list_users_in_group (Disable debug mode to suppress these details.)',
u'title': u'Forbidden'}}.

  To reproduce this bug you may use the following code:

  
  import requests
  import json


  
  def get_unscoped_token(username, password, domain):
      headers = {'Content-Type': 'application/json'}
      payload = {'auth': {'identity': {'password': {'user': {
          'domain': {'name': domain}, 'password': password,
          'name': username}}, 'methods': ['password']}}}
      r = requests.post(OS_AUTH_URL, data=json.dumps(payload), headers=headers)
      return r.headers['X-Subject-Token']

  def get_token_scoped_to_domain(unscoped_token, domain):
      headers = {'Content-Type': 'application/json'}
      payload = {'auth': {'scope': {'domain': {'name': domain}},
                          'identity': {'token': {'id': unscoped_token},
                                       'methods': ['token']}}}
      r = requests.post(OS_AUTH_URL, data=json.dumps(payload), headers=headers)
      return r.headers['X-Subject-Token']

  def get_token_scoped_to_project(unscoped_token, project):
      headers = {'Content-Type':

[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-26 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in Magnum - Containers for OpenStack:
  New
Status in Manila:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  New
Status in Openstack Database (Trove):
  New

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE-level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458993] [NEW] Encase helper-functions.spec.js in IIFE

2015-05-26 Thread Thai Tran
Public bug reported:

https://review.openstack.org/#/c/185140/6/horizon/static/framework/util/tech-debt/helper-functions.spec.js
The file needs to be wrapped in an Immediately Invoked Function Expression
(IIFE) and the jshint globals removed.

** Affects: horizon
 Importance: Low
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458993

Title:
  Encase helper-functions.spec.js in IIFE

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
https://review.openstack.org/#/c/185140/6/horizon/static/framework/util/tech-debt/helper-functions.spec.js
  The file needs to be wrapped in an Immediately Invoked Function Expression
  (IIFE) and the jshint globals removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459021] [NEW] nova vmware unit tests failing with oslo.vmware 0.13.0

2015-05-26 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/68/184968/2/check/gate-nova-
python27/e3dadf7/console.html#_2015-05-26_20_45_35_734

2015-05-26 20:45:35.734 | {4} 
nova.tests.unit.virt.vmwareapi.test_vm_util.VMwareVMUtilTestCase.test_create_vm_invalid_guestid
 [0.058940s] ... FAILED
2015-05-26 20:45:35.735 | 
2015-05-26 20:45:35.735 | Captured traceback:
2015-05-26 20:45:35.735 | ~~~
2015-05-26 20:45:35.736 | Traceback (most recent call last):
2015-05-26 20:45:35.736 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 1201, in patched
2015-05-26 20:45:35.736 | return func(*args, **keywargs)
2015-05-26 20:45:35.737 |   File 
nova/tests/unit/virt/vmwareapi/test_vm_util.py, line 796, in 
test_create_vm_invalid_guestid
2015-05-26 20:45:35.737 | 'folder', config_spec, 'res-pool')
2015-05-26 20:45:35.737 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 422, in assertRaises
2015-05-26 20:45:35.738 | self.assertThat(our_callable, matcher)
2015-05-26 20:45:35.738 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
2015-05-26 20:45:35.738 | mismatch_error = self._matchHelper(matchee, 
matcher, message, verbose)
2015-05-26 20:45:35.738 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 483, in _matchHelper
2015-05-26 20:45:35.739 | mismatch = matcher.match(matchee)
2015-05-26 20:45:35.739 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 108, in match
2015-05-26 20:45:35.739 | mismatch = 
self.exception_matcher.match(exc_info)
2015-05-26 20:45:35.740 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 62, in match
2015-05-26 20:45:35.740 | mismatch = matcher.match(matchee)
2015-05-26 20:45:35.740 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 414, in match
2015-05-26 20:45:35.741 | reraise(*matchee)
2015-05-26 20:45:35.741 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 101, in match
2015-05-26 20:45:35.741 | result = matchee()
2015-05-26 20:45:35.742 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 969, in __call__
2015-05-26 20:45:35.742 | return self._callable_object(*self._args, 
**self._kwargs)
2015-05-26 20:45:35.742 |   File nova/virt/vmwareapi/vm_util.py, line 
1280, in create_vm
2015-05-26 20:45:35.742 | task_info = 
session._wait_for_task(vm_create_task)
2015-05-26 20:45:35.743 |   File nova/virt/vmwareapi/driver.py, line 714, 
in _wait_for_task
2015-05-26 20:45:35.743 | return self.wait_for_task(task_ref)
2015-05-26 20:45:35.743 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py,
 line 381, in wait_for_task
2015-05-26 20:45:35.744 | return evt.wait()
2015-05-26 20:45:35.744 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/event.py,
 line 121, in wait
2015-05-26 20:45:35.744 | return hubs.get_hub().switch()
2015-05-26 20:45:35.745 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 294, in switch
2015-05-26 20:45:35.745 | return self.greenlet.switch()
2015-05-26 20:45:35.745 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py,
 line 76, in _inner
2015-05-26 20:45:35.745 | self.f(*self.args, **self.kw)
2015-05-26 20:45:35.746 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py,
 line 423, in _poll_task
2015-05-26 20:45:35.746 | raise task_ex
2015-05-26 20:45:35.746 | oslo_vmware.exceptions.VimFaultException: Error 
message
2015-05-26 20:45:35.747 | Faults: ['VMwareDriverException']
2015-05-26 20:45:35.747 | 
2015-05-26 20:45:35.747 | 
2015-05-26 20:45:35.748 | Captured pythonlogging:
2015-05-26 20:45:35.748 | ~~~
2015-05-26 20:45:35.748 | 2015-05-26 20:45:35,632 INFO [oslo_vmware.api] 
Successfully established new session; session ID is ae214.
2015-05-26 20:45:35.748 | 2015-05-26 20:45:35,633 ERROR 
[oslo_vmware.common.loopingcall] in fixed duration looping call
2015-05-26 20:45:35.749 | Traceback (most 

[Yahoo-eng-team] [Bug 1459021] Re: nova vmware unit tests failing with oslo.vmware 0.13.0

2015-05-26 Thread Matt Riedemann
I'm guessing this is the change that's breaking the tests:

https://review.openstack.org/#/c/176694/

** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459021

Title:
  nova vmware unit tests failing with oslo.vmware 0.13.0

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  In Progress

Bug description:
  http://logs.openstack.org/68/184968/2/check/gate-nova-
  python27/e3dadf7/console.html#_2015-05-26_20_45_35_734

  2015-05-26 20:45:35.734 | {4} 
nova.tests.unit.virt.vmwareapi.test_vm_util.VMwareVMUtilTestCase.test_create_vm_invalid_guestid
 [0.058940s] ... FAILED
  2015-05-26 20:45:35.735 | 
  2015-05-26 20:45:35.735 | Captured traceback:
  2015-05-26 20:45:35.735 | ~~~
  2015-05-26 20:45:35.736 | Traceback (most recent call last):
  2015-05-26 20:45:35.736 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 1201, in patched
  2015-05-26 20:45:35.736 | return func(*args, **keywargs)
  2015-05-26 20:45:35.737 |   File 
nova/tests/unit/virt/vmwareapi/test_vm_util.py, line 796, in 
test_create_vm_invalid_guestid
  2015-05-26 20:45:35.737 | 'folder', config_spec, 'res-pool')
  2015-05-26 20:45:35.737 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 422, in assertRaises
  2015-05-26 20:45:35.738 | self.assertThat(our_callable, matcher)
  2015-05-26 20:45:35.738 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
  2015-05-26 20:45:35.738 | mismatch_error = self._matchHelper(matchee, 
matcher, message, verbose)
  2015-05-26 20:45:35.738 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 483, in _matchHelper
  2015-05-26 20:45:35.739 | mismatch = matcher.match(matchee)
  2015-05-26 20:45:35.739 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 108, in match
  2015-05-26 20:45:35.739 | mismatch = 
self.exception_matcher.match(exc_info)
  2015-05-26 20:45:35.740 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 62, in match
  2015-05-26 20:45:35.740 | mismatch = matcher.match(matchee)
  2015-05-26 20:45:35.740 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 414, in match
  2015-05-26 20:45:35.741 | reraise(*matchee)
  2015-05-26 20:45:35.741 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 101, in match
  2015-05-26 20:45:35.741 | result = matchee()
  2015-05-26 20:45:35.742 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 969, in __call__
  2015-05-26 20:45:35.742 | return self._callable_object(*self._args, 
**self._kwargs)
  2015-05-26 20:45:35.742 |   File nova/virt/vmwareapi/vm_util.py, line 
1280, in create_vm
  2015-05-26 20:45:35.742 | task_info = 
session._wait_for_task(vm_create_task)
  2015-05-26 20:45:35.743 |   File nova/virt/vmwareapi/driver.py, line 
714, in _wait_for_task
  2015-05-26 20:45:35.743 | return self.wait_for_task(task_ref)
  2015-05-26 20:45:35.743 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py,
 line 381, in wait_for_task
  2015-05-26 20:45:35.744 | return evt.wait()
  2015-05-26 20:45:35.744 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/event.py,
 line 121, in wait
  2015-05-26 20:45:35.744 | return hubs.get_hub().switch()
  2015-05-26 20:45:35.745 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 294, in switch
  2015-05-26 20:45:35.745 | return self.greenlet.switch()
  2015-05-26 20:45:35.745 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py,
 line 76, in _inner
  2015-05-26 20:45:35.745 | self.f(*self.args, **self.kw)
  2015-05-26 20:45:35.746 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py,
 line 423, in _poll_task
  2015-05-26 20:45:35.746 | raise task_ex

[Yahoo-eng-team] [Bug 1459021] Re: nova vmware unit tests failing with oslo.vmware 0.13.0

2015-05-26 Thread Matt Riedemann
This is a change proposed to global-requirements to blacklist the 0.13.0
version from nova:

https://review.openstack.org/#/c/185748/

** Changed in: nova
   Status: Confirmed = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459021

Title:
  nova vmware unit tests failing with oslo.vmware 0.13.0

Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo VMware library for OpenStack projects:
  In Progress

Bug description:
  http://logs.openstack.org/68/184968/2/check/gate-nova-
  python27/e3dadf7/console.html#_2015-05-26_20_45_35_734

  2015-05-26 20:45:35.734 | {4} 
nova.tests.unit.virt.vmwareapi.test_vm_util.VMwareVMUtilTestCase.test_create_vm_invalid_guestid
 [0.058940s] ... FAILED
  2015-05-26 20:45:35.735 | 
  2015-05-26 20:45:35.735 | Captured traceback:
  2015-05-26 20:45:35.735 | ~~~
  2015-05-26 20:45:35.736 | Traceback (most recent call last):
  2015-05-26 20:45:35.736 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 1201, in patched
  2015-05-26 20:45:35.736 | return func(*args, **keywargs)
  2015-05-26 20:45:35.737 |   File 
nova/tests/unit/virt/vmwareapi/test_vm_util.py, line 796, in 
test_create_vm_invalid_guestid
  2015-05-26 20:45:35.737 | 'folder', config_spec, 'res-pool')
  2015-05-26 20:45:35.737 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 422, in assertRaises
  2015-05-26 20:45:35.738 | self.assertThat(our_callable, matcher)
  2015-05-26 20:45:35.738 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
  2015-05-26 20:45:35.738 | mismatch_error = self._matchHelper(matchee, 
matcher, message, verbose)
  2015-05-26 20:45:35.738 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 483, in _matchHelper
  2015-05-26 20:45:35.739 | mismatch = matcher.match(matchee)
  2015-05-26 20:45:35.739 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 108, in match
  2015-05-26 20:45:35.739 | mismatch = 
self.exception_matcher.match(exc_info)
  2015-05-26 20:45:35.740 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 62, in match
  2015-05-26 20:45:35.740 | mismatch = matcher.match(matchee)
  2015-05-26 20:45:35.740 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 414, in match
  2015-05-26 20:45:35.741 | reraise(*matchee)
  2015-05-26 20:45:35.741 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 101, in match
  2015-05-26 20:45:35.741 | result = matchee()
  2015-05-26 20:45:35.742 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 969, in __call__
  2015-05-26 20:45:35.742 | return self._callable_object(*self._args, 
**self._kwargs)
  2015-05-26 20:45:35.742 |   File nova/virt/vmwareapi/vm_util.py, line 
1280, in create_vm
  2015-05-26 20:45:35.742 | task_info = 
session._wait_for_task(vm_create_task)
  2015-05-26 20:45:35.743 |   File nova/virt/vmwareapi/driver.py, line 
714, in _wait_for_task
  2015-05-26 20:45:35.743 | return self.wait_for_task(task_ref)
  2015-05-26 20:45:35.743 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py,
 line 381, in wait_for_task
  2015-05-26 20:45:35.744 | return evt.wait()
  2015-05-26 20:45:35.744 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/event.py,
 line 121, in wait
  2015-05-26 20:45:35.744 | return hubs.get_hub().switch()
  2015-05-26 20:45:35.745 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 294, in switch
  2015-05-26 20:45:35.745 | return self.greenlet.switch()
  2015-05-26 20:45:35.745 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py,
 line 76, in _inner
  2015-05-26 20:45:35.745 | self.f(*self.args, **self.kw)
  2015-05-26 20:45:35.746 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_vmware/api.py,
 line 423, in _poll_task
  2015-05-26 20:45:35.746 | 

[Yahoo-eng-team] [Bug 1458890] [NEW] Add segment support to Neutron

2015-05-26 Thread Kyle Mestery
Public bug reported:

This is feedback from the Vancouver OpenStack Summit.

During the large deployment team (Go Daddy, Yahoo!, NeCTAR, CERN,
Rackspace, HP, BlueBox, among others) meeting, there was a discussion of
network architectures that we use to deliver OpenStack.  As we talked it
became clear that there are a number of challenges around networking.

In many cases, our data center networks are architected with a
differentiation between layer 2 and layer 3.  Said another way, there
are distinct network segments which are only available to a subset of
compute hosts.  These topologies are typically necessary to manage
network resource capacity (IP addresses, broadcast domain size, ARP
tables, etc.)   Network topologies like these are not possible to
describe with Neutron constructs today.

The traditional solution to this is tunneling and overlay networks which
makes all networks available everywhere in the data center.  However,
overlay networks represent a large increase in complexity that can be
very difficult to troubleshoot.  For this reason, many large deployers
are not using overlay networks at all (or only for specific use cases
like private tenant networks.)

Because Neutron does not have constructs that accurately describe our
network architectures, we'd like to see the notion of a network
segment in Neutron.  A segment could mean an L2 domain, IP block
boundary, or other partition.  Operators could use this new construct to
build accurate models of network topology within Neutron, making it much
more usable.

Example:  The typical use case is L2 segments that are restricted to
a single rack (or some subset of compute hosts), but are still part of a
larger L3 network.  In this case, the overall Neutron network would
describe the L3 network, and the network segments would be used to
describe the L2 segments.


With the network segment construct (which is not intended to be exposed to end 
users), there is also a need for some scheduling logic around placement and 
addressing of instances on an appropriate network segment based on availability 
and capacity.  This also implies a means via API to report IP capacity of 
networks and segments, so we can filter out segments without capacity and the 
compute nodes that are tied to those segments.

Example:  The end user chooses the Neutron network for their
instance, which is actually comprised of several lower level network
segments within Neutron.  Scheduling must be done such that the network
segment chosen for the instance is available to the compute node on
which the instance is placed.  Additionally, the network segment that's
chosen must have available IP capacity in order for the instance to be
placed there.


Also, the scheduling for resize, migrate, ... should only consider the compute 
nodes allowed in the network segment where the VM is placed.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458890

Title:
  Add segment support to Neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is feedback from the Vancouver OpenStack Summit.

  During the large deployment team (Go Daddy, Yahoo!, NeCTAR, CERN,
  Rackspace, HP, BlueBox, among others) meeting, there was a discussion
  of network architectures that we use to deliver Openstack.  As we
  talked it became clear that there are a number of challenges around
  networking.

  In many cases, our data center networks are architected with a
  differentiation between layer 2 and layer 3.  Said another way, there
  are distinct network segments which are only available to a subset
  of compute hosts.  These topologies are typically necessary to manage
  network resource capacity (IP addresses, broadcast domain size, ARP
  tables, etc.)   Network topologies like these are not possible to
  describe with Neutron constructs today.

  The traditional solution to this is tunneling and overlay networks
  which makes all networks available everywhere in the data center.
  However, overlay networks represent a large increase in complexity
  that can be very difficult to troubleshoot.  For this reason, many
  large deployers are not using overlay networks at all (or only for
  specific use cases like private tenant networks.)

  Because Neutron does not have constructs that accurately describe our
  network architectures, we'd like to see the notion of a network
  segment in Neutron.  A segment could mean an L2 domain, IP block
  boundary, or other partition.  Operators could use this new construct
  to build accurate models of network topology within Neutron, making it
  much more usable.

  Example:  The typical use case is L2 segments that are restricted
  to a single rack (or some subset of compute hosts), but are still part
  of a larger L3 

[Yahoo-eng-team] [Bug 1455233] Re: read_seeded broken

2015-05-26 Thread Scott Moser
** Also affects: cloud-init (Ubuntu Vivid)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1455233

Title:
  read_seeded broken

Status in Init scripts for use on cloud images:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Vivid:
  New

Bug description:
  util.read_seeded uses load_tfile_or_url, but then treats the return
  value as if it was a response.

  this regressed in revno 1067.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1455233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334368] Re: HEAD and GET inconsistencies in Keystone

2015-05-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/185469
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=b28a90ce696a1826daf243fc3f64c0f1292bddea
Submitter: Jenkins
Branch:master

commit b28a90ce696a1826daf243fc3f64c0f1292bddea
Author: Diane Fleming difle...@cisco.com
Date:   Mon May 25 18:11:27 2015 -0500

Update HEAD methods to return 200 return code

Change-Id: Ia102d5d3e8d86ae1429e3d57bc6c98ad3e2c9072
Closes-Bug: #1334368


** Changed in: openstack-api-site
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1334368

Title:
  HEAD and GET inconsistencies in Keystone

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released
Status in OpenStack API documentation site:
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  While trying to convert Keystone to gate/check under mod_wsgi, it was
  noticed that occasionally a few HEAD calls were returning HTTP 200
  where under eventlet they consistently return HTTP 204.

  This is an inconsistency within Keystone. Based upon the RFC, HEAD
  should be identical to GET except that there is no body returned.
  Apache + MOD_WSGI in some cases converts a HEAD request to a GET
  request to the back-end wsgi application to avoid issues where the
  headers cannot be built to be sent as part of the response (this can
  occur when no content is returned from the wsgi app).

  This situation shows that Keystone should likely never build specific
  HEAD request methods, and should instead have HEAD simply call the
  controller's GET handler; the wsgi-layer should then simply remove the
  response body.

  This will help to simplify Keystone's code as well as make the API
  responses more consistent.
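
  A minimal WSGI sketch of that "HEAD is GET minus the body" idea
  (illustrative only, not Keystone's actual middleware):

      def head_as_get(app):
          def middleware(environ, start_response):
              is_head = environ.get('REQUEST_METHOD') == 'HEAD'
              if is_head:
                  # let the normal GET handler build status and headers
                  environ['REQUEST_METHOD'] = 'GET'
              body = app(environ, start_response)
              # for HEAD, drop the body and keep everything else identical
              return [] if is_head else body
          return middleware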

  Example Error in Gate:

  2014-06-25 05:20:37.820 | 
tempest.api.identity.admin.v3.test_trusts.TrustsV3TestJSON.test_trust_expire[gate,smoke]
  2014-06-25 05:20:37.820 | 

  2014-06-25 05:20:37.820 | 
  2014-06-25 05:20:37.820 | Captured traceback:
  2014-06-25 05:20:37.820 | ~~~
  2014-06-25 05:20:37.820 | Traceback (most recent call last):
  2014-06-25 05:20:37.820 |   File 
tempest/api/identity/admin/v3/test_trusts.py, line 241, in test_trust_expire
  2014-06-25 05:20:37.820 | self.check_trust_roles()
  2014-06-25 05:20:37.820 |   File 
tempest/api/identity/admin/v3/test_trusts.py, line 173, in check_trust_roles
  2014-06-25 05:20:37.821 | self.assertEqual('204', resp['status'])
  2014-06-25 05:20:37.821 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 321, in 
assertEqual
  2014-06-25 05:20:37.821 | self.assertThat(observed, matcher, message)
  2014-06-25 05:20:37.821 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
  2014-06-25 05:20:37.821 | raise mismatch_error
  2014-06-25 05:20:37.821 | MismatchError: '204' != '200'

  
  This is likely going to require changes to Keystone, Keystoneclient, Tempest, 
and possibly services that consume data from keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1334368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1395737] Re: inconsistent use of detail vs details

2015-05-26 Thread utsav dusad
** Changed in: horizon
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1395737

Title:
  inconsistent use of detail vs details

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Horizon titles details pages (or are they detail pages?) sometimes as
  Detail and sometimes as Details.  We should sort out the best word
  to use and use it consistently.

  Discovered during discussion of
  https://review.openstack.org/#/c/136056/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1395737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451860] Re: Attached volume migration failed due to incorrect argument order passed to swap_volume

2015-05-26 Thread Bjoern Teipel
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451860

Title:
  Attached volume migration failed due to incorrect argument order
  passed to swap_volume

Status in OpenStack Compute (Nova):
  Fix Committed
Status in Ansible playbooks for deploying OpenStack:
  New

Bug description:
  Steps to reproduce:
  1. create a volume in cinder
  2. boot a server from image in nova
  3. attach this volume to server
  4. use ' cinder migrate  --force-host-copy True  
3fa956b6-ba59-46df-8a26-97fcbc18fc82 openstack-wangp11-02@pool_backend_1#Pool_1'

  Log from nova compute (see attached for detailed info):

  2015-05-05 00:33:31.768 ERROR root [req-b8424cde-e126-41b0-a27a-ef675e0c207f 
admin admin] Original exception being dropped: ['Traceback (most recent ca
  ll last):\n', '  File /opt/stack/nova/nova/compute/manager.py, line 351, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n
  ', '  File /opt/stack/nova/nova/compute/manager.py, line 4982, in 
swap_volume\ncontext, old_volume_id, instance_uuid=instance.uuid)\n', 
Attribut
  eError: 'unicode' object has no attribute 'uuid'\n]

  
  according to my debug result:

  # here is how the parameters are passed to swap_volume
  def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id):
      return self.manager.swap_volume(ctxt, instance, old_volume_id,
                                      new_volume_id)

  # swap_volume function
  @wrap_exception()
  @reverts_task_state
  @wrap_instance_fault
  def swap_volume(self, context, old_volume_id, new_volume_id, instance):
      """Swap volume for an instance."""
      context = context.elevated()

      bdm = objects.BlockDeviceMapping.get_by_volume_id(
          context, old_volume_id, instance_uuid=instance.uuid)
      connector = self.driver.get_volume_connector(instance)

  
  You can see that the caller passes (self, ctxt, instance, old_volume_id,
new_volume_id) while the function definition expects (self, context,
old_volume_id, new_volume_id, instance).

  This causes the 'unicode' object has no attribute 'uuid' error when
trying to access instance['uuid']
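
  A minimal sketch of what the corrected delegation would look like
  (argument names taken from the snippets above; illustrative only, not the
  actual committed fix):

      def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id):
          # pass the arguments in the order the manager's signature expects:
          # (context, old_volume_id, new_volume_id, instance)
          return self.manager.swap_volume(ctxt, old_volume_id, new_volume_id,
                                          instance)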


  BTW: this problem was introduced in
  https://review.openstack.org/#/c/172152

  affect both Kilo and master

  Thanks
  Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458913] [NEW] haproxy-driver lock is held while gratuitous arping

2015-05-26 Thread Attila Fazekas
Public bug reported:

https://github.com/openstack/neutron-
lbaas/blob/b868d6d3ef0066a1ac4318d7e91b4d7a076a2e61/neutron_lbaas/drivers/haproxy/namespace_driver.py#L327

arping can take a relatively long time (2 sec) while the global lock is
held.

The gratuitous arping should not block other threads.

The neutron code base already contains a `non-block` version:
https://github.com/openstack/neutron/blob/6d2794345db2d7b12502f6e7b2d99e05a85b9030/neutron/agent/linux/ip_lib.py#L732

Please do not increase the time the lock is held by arping.
Consider using the send_gratuitous_arp function from ip_lib.py.

PS.:
The same blocking arping is also re-implemented in
neutron_lbaas/drivers/haproxy/synchronous_namespace_driver.py.
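
A minimal sketch of the non-blocking pattern (the namespace, interface and
command handling here are illustrative, not the actual namespace_driver
code):

    import eventlet
    from eventlet.green import subprocess

    def _send_garp(namespace, interface, ip_address, count=3):
        cmd = ['ip', 'netns', 'exec', namespace,
               'arping', '-U', '-I', interface, '-c', str(count), ip_address]
        subprocess.call(cmd)

    def schedule_garp(namespace, interface, ip_address):
        # fire-and-forget: the slow arping runs in its own greenthread
        # instead of inside the driver's global lock
        eventlet.spawn_n(_send_garp, namespace, interface, ip_address)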

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458913

Title:
  haproxy-driver lock is held while gratuitous arping

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  https://github.com/openstack/neutron-
  
lbaas/blob/b868d6d3ef0066a1ac4318d7e91b4d7a076a2e61/neutron_lbaas/drivers/haproxy/namespace_driver.py#L327

  arping can take a relatively long time (2 sec) while the global lock is
  held.

  The gratuitous arping should not block other threads.

  The neutron code base already contains a `non-block` version:
  
https://github.com/openstack/neutron/blob/6d2794345db2d7b12502f6e7b2d99e05a85b9030/neutron/agent/linux/ip_lib.py#L732

  Please do not increase the time the lock is held by arping.
  Consider using the send_gratuitous_arp function from ip_lib.py.

  PS.:
  The same blocking arping is also re-implemented in
neutron_lbaas/drivers/haproxy/synchronous_namespace_driver.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375252] Re: Hostname change is not preserved across reboot on Azure Ubuntu VMs

2015-05-26 Thread Ben Howard
** Also affects: cloud-init (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Also affects: walinuxagent (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: walinuxagent (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Utopic)
   Importance: Undecided
   Status: New

** Also affects: walinuxagent (Ubuntu Utopic)
   Importance: Undecided
   Status: New

** Changed in: walinuxagent (Ubuntu Trusty)
   Status: New = Invalid

** Changed in: walinuxagent (Ubuntu Utopic)
   Status: New = Invalid

** Changed in: walinuxagent (Ubuntu Vivid)
   Status: New = Invalid

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided = Medium

** Changed in: cloud-init (Ubuntu Trusty)
   Importance: Undecided = Medium

** Changed in: cloud-init (Ubuntu Utopic)
   Importance: Undecided = Medium

** Changed in: cloud-init (Ubuntu Vivid)
   Importance: Undecided = Medium

** Changed in: cloud-init (Ubuntu)
 Assignee: (unassigned) = Ben Howard (utlemming)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1375252

Title:
  Hostname change is not preserved across reboot on Azure Ubuntu VMs

Status in Init scripts for use on cloud images:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in walinuxagent package in Ubuntu:
  Invalid
Status in cloud-init source package in Trusty:
  New
Status in walinuxagent source package in Trusty:
  Invalid
Status in cloud-init source package in Utopic:
  New
Status in walinuxagent source package in Utopic:
  Invalid
Status in cloud-init source package in Vivid:
  New
Status in walinuxagent source package in Vivid:
  Invalid

Bug description:
  Whilst a hostname change is immediately effective using the hostname
  or hostnamectl commands, and changing the hostname this way is
  propagated to the hostname field in the Azure dashboard, upon
  rebooting the Ubuntu VM the hostname reverts to the Virtual Machine
  name as displayed in the Azure dashboard.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: walinuxagent 2.0.8-0ubuntu1~14.04.0
  ProcVersionSignature: Ubuntu 3.13.0-36.63-generic 3.13.11.6
  Uname: Linux 3.13.0-36-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.4
  Architecture: amd64
  Date: Mon Sep 29 12:48:56 2014
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=set
   LANG=en_GB.UTF-8
   SHELL=/bin/bash
  SourcePackage: walinuxagent
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.waagent.conf: 2014-09-29T09:37:10.758660

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1375252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459030] [NEW] Add dns_label to Neutron port

2015-05-26 Thread Carl Baldwin
Public bug reported:

See the spec for more details https://review.openstack.org/#/c/88623

This dns_label field will be used for DNS resolution of the hostname in
dnsmasq and also will be used when Neutron can integrate with external
DNS systems.

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Lavalle (minsel)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) = Miguel Lavalle (minsel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459030

Title:
  Add dns_label to Neutron port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  See the spec for more details https://review.openstack.org/#/c/88623

  This dns_label field will be used for DNS resolution of the hostname
  in dnsmasq and also will be used when Neutron can integrate with
  external DNS systems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458938] [NEW] Small fix to angular docs

2015-05-26 Thread Thai Tran
Public bug reported:

https://review.openstack.org/#/c/184345/7/doc/source/contributing.rst
Where we change my-controller to my_module.my_controller for consistency.

** Affects: horizon
 Importance: Low
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458938

Title:
  Small fix to angular docs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://review.openstack.org/#/c/184345/7/doc/source/contributing.rst
  Where we change my-controller to my_module.my_controller for consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458861] Re: Unable to retrieve instances after changing to multi-domain setup

2015-05-26 Thread Davanum Srinivas (DIMS)
The error is coming from keystone -
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/controller.py#n237

** Project changed: nova = keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1458861

Title:
  Unable to retrieve instances after changing to multi-domain setup

Status in OpenStack Identity (Keystone):
  New

Bug description:
  After I changed keystone to the multi-domain driver, I get the following
  error message on the horizon dashboard when I want to display instances:
  Error: Unauthorized: Unable to retrieve instances

  Name  : openstack-nova-api
  Arch: noarch
  Version   : 2014.2.2

  /var/log/nova/nova.log
  2015-05-26 14:09:44.512 2175 WARNING keystonemiddleware.auth_token [-] 
Identity response: {error: {message: Non-default domain is not supported 
(Disable debug mode to suppress these details.), code: 401, title: 
Unauthorized}}
  2015-05-26 14:09:44.512 2175 WARNING keystonemiddleware.auth_token [-] 
Authorization failed for token
  2015-05-26 14:09:44.513 2175 INFO nova.osapi_compute.wsgi.server [-] 
10.0.0.10 GET 
/v2/1d524a0433474fa48eb376d913a80fc1/servers/detail?limit=21project_id=1d524a0433474fa48eb376d913a80fc1
 HTTP/1.1 status: 401 len: 258 time: 0.4322391
  2015-05-26 14:09:44.518 2175 WARNING keystonemiddleware.auth_token [-] Unable 
to find authentication token in headers

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1458861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458919] [NEW] a race in service report count update

2015-05-26 Thread Liang Chen
Public bug reported:

Update of report_count is not protected in a transaction -
https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L82
.  When multiple service worker processes are used, they may overwrite
each other's report_count update.
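
A minimal sketch of the kind of atomic, read-free increment that would avoid
the overwrite (SQLAlchemy, with an assumed table layout; not nova's actual
schema code):

    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            create_engine)

    engine = create_engine('sqlite://')
    meta = MetaData()
    services = Table('services', meta,
                     Column('id', Integer, primary_key=True),
                     Column('host', String(255)),
                     Column('report_count', Integer, default=0))
    meta.create_all(engine)

    with engine.begin() as conn:
        conn.execute(services.insert().values(host='compute-1',
                                              report_count=0))

    # let the database do the increment instead of read-modify-write in
    # Python, so concurrent workers cannot overwrite each other's update
    with engine.begin() as conn:
        conn.execute(
            services.update()
            .where(services.c.host == 'compute-1')
            .values(report_count=services.c.report_count + 1))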

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458919

Title:
  a race in service report count update

Status in OpenStack Compute (Nova):
  New

Bug description:
  Update of report_count is not protected in a transaction -
  
https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L82
  .  When multiple service worker processes are used, they may overwrite
  each other's report_count update.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1458919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] [NEW] Use graduated oslo.policy instead of oslo-incubator code

2015-05-26 Thread Samuel de Medeiros Queiroz
Public bug reported:

The policy code is now managed as a library, named oslo.policy.

If there is a CVE-level defect, deploying a fix should require deploying
a new version of the library, not syncing each individual project.

All the projects in the OpenStack ecosystem that are using the policy
code from oslo-incubator should use the new library.
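
A minimal sketch of consuming the graduated library (the rule name and the
in-line rule loading are illustrative only; real services load their rules
from policy.json):

    import json

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF([])

    # parse a tiny rule set inline instead of reading a policy file
    rules = policy.Rules.load_json(
        json.dumps({'identity:list_users_in_group': 'role:admin'}))
    enforcer = policy.Enforcer(CONF, rules=rules, use_conf=False)

    creds = {'roles': ['admin'], 'user_id': 'u1', 'project_id': 'p1'}
    print(enforcer.enforce('identity:list_users_in_group', {}, creds))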

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: heat
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: swift
 Importance: Undecided
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Samuel de Medeiros Queiroz (samueldmq)

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in Cinder:
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  The policy code is now managed as a library, named oslo.policy.

  If there is a CVE-level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437902] Re: nova redeclares the `nova` named exchange zillion times without a real need

2015-05-26 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.messaging
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437902

Title:
  nova redeclares the `nova` named exchange zillion times without a real
  need

Status in OpenStack Compute (Nova):
  Incomplete
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  The AMQP broker preserves the exchanges; they are replicated to all brokers
even in non-HA mode.
  A transient exchange can disappear ONLY when the user explicitly requests
its deletion or when the full rabbit cluster dies.

  It is more efficient to declare an exchange only when it is really missing.

  The application MUST redeclare the exchange when it was reported as Not Found.
  Note: channel exceptions cause channel termination, but not connection
termination.
  The application MAY try to redeclare the exchange on connection breakage; it
can assume the messaging cluster is dead.
  The application SHOULD redeclare the exchange at application startup to verify
the attributes (before the first usage).
  The application does not need to redeclare the exchange in any other case.

  Now, a significant amount of the AMQP request/responses are
  Exchange.Declare - Exchange.Declare-Ok (one per publish?).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp