[Yahoo-eng-team] [Bug 1435814] [NEW] bad preset on system/resource usage

2015-03-24 Thread Matthias Runge
Public bug reported:

When using a system with accumulated history, the search for metering
data on http://localhost:8000/admin/metering/

takes way too long.

Looking at the past 7 days is most probably not a good idea as the
*default* option.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435814

Title:
  bad preset on system/resource usage

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When using a system with accumulated history, the search for metering
  data on http://localhost:8000/admin/metering/

  takes way too long.

  Looking at the past 7 days is most probably not a good idea as the
  *default* option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435873] [NEW] Debian - OpenStack Icehouse - Neutron Error 1054, Unknown column 'routers.enable_snat'

2015-03-24 Thread Wallace
Public bug reported:

I'm French, so my English is poor.

I'm running Debian Wheezy 7 with OpenStack Icehouse.

I followed these tutorials:

• https://fosskb.wordpress.com/2014/06/02/openstack-icehouse-on-debian-
wheezy-single-machine-setup/comment-page-1/#comment-642

• http://docs.openstack.org/juno/install-guide/install/apt-debian
/openstack-install-guide-apt-debian-juno.pdf

When I reached the router configuration step and tried to create a
router with the command: neutron router-create demo-router

I got Request Failed: internal server error while processing your
request.

The log in /var/log/neutron-server.log shows this:

TRACE neutron.api.v2.resource OperationalError: (OperationalError)
(1054, "Unknown column 'routers.enable_snat' in 'field list'") 'SELECT
count(*) AS count_1 \nFROM (SELECT routers.tenant_id AS
routers_tenant_id, routers.id AS routers_id, routers.name AS
routers_name, routers.status AS routers_status, routers.admin_state_up
AS routers_admin_state_up, routers.gw_port_id AS routers_gw_port_id,
routers.enable_snat AS routers_enable_snat \nFROM routers \nWHERE
routers.tenant_id IN (%s)) AS anon_1'
('8348602c43c44c63b5f161a404afe1da',)


Can anyone help me, please?

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: debian error icehouse neutron openstack routers

** Attachment added: "error in log"
   
https://bugs.launchpad.net/bugs/1435873/+attachment/4354510/+files/Capture.JPG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435873

Title:
  Debian - OpenStack Icehouse - Neutron Error 1054, Unknown column
  'routers.enable_snat'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm French, so my English is poor.

  I'm running Debian Wheezy 7 with OpenStack Icehouse.

  I followed these tutorials:

  • https://fosskb.wordpress.com/2014/06/02/openstack-icehouse-on-
  debian-wheezy-single-machine-setup/comment-page-1/#comment-642

  • http://docs.openstack.org/juno/install-guide/install/apt-debian
  /openstack-install-guide-apt-debian-juno.pdf

  When I reached the router configuration step and tried to create a
  router with the command: neutron router-create demo-router

  I got Request Failed: internal server error while processing your
  request.

  The log in /var/log/neutron-server.log shows this:

  TRACE neutron.api.v2.resource OperationalError: (OperationalError)
  (1054, "Unknown column 'routers.enable_snat' in 'field list'") 'SELECT
  count(*) AS count_1 \nFROM (SELECT routers.tenant_id AS
  routers_tenant_id, routers.id AS routers_id, routers.name AS
  routers_name, routers.status AS routers_status, routers.admin_state_up
  AS routers_admin_state_up, routers.gw_port_id AS routers_gw_port_id,
  routers.enable_snat AS routers_enable_snat \nFROM routers \nWHERE
  routers.tenant_id IN (%s)) AS anon_1'
  ('8348602c43c44c63b5f161a404afe1da',)


  Can anyone help me, please?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435855] Re: Default rule does not work in ceilometer policy.json

2015-03-24 Thread Matthew Edmonds
** Also affects: ceilometer
   Importance: Undecided
   Status: New

** No longer affects: ceilometer

** Project changed: keystone => ceilometer

** Changed in: ceilometer
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1435855

Title:
  Default rule does not work in ceilometer policy.json

Status in OpenStack Telemetry (Ceilometer):
  In Progress

Bug description:
  The rule "default" does not work for ceilometer. I tried a few of
  these and they don't work. I am able to proceed with REST APIs that
  are not mentioned even when the default is set to not_allowed.

  "default": "not_allowed:True",
  "default": "!",

  The problem appears to be here /usr/lib/python2.7/site-
  packages/ceilometer/api/rbac.py

  for rule_name in _ENFORCER.rules.keys():
      if rule_method == rule_name:
          if not _ENFORCER.enforce(
                  rule_name,
                  {},
                  policy_dict):
              pecan.core.abort(status_code=403,
                               detail='RBAC Authorization Failed')

  
  The rbac.enforce method loops through all the rules and filters for the
  one that matches the requested rule. However, in the case where the rule
  has not been specified in the policy.json file, there is no logic in the
  above to fall back on the default value. The default logic is already
  taken care of by oslo_policy, and the above loop seems to be causing the
  problem.
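
  For illustration, a minimal sketch (not the actual fix; it assumes
  oslo_policy's Enforcer falls back to its "default" rule for names absent
  from policy.json, and it reuses _ENFORCER and pecan from the snippet
  above) of enforcing without the pre-filtering loop:

      def enforce(rule_method, policy_dict):
          # No pre-filtering against _ENFORCER.rules.keys(): unknown rule
          # names drop through to the configured "default" rule.
          if not _ENFORCER.enforce(rule_method, {}, policy_dict):
              pecan.core.abort(status_code=403,
                               detail='RBAC Authorization Failed')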

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1435855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430057] Re: Fake instance stuck in MIGRATING state

2015-03-24 Thread Sean Dague
This is just beyond scope of the current fake driver. Please feel free
to push enhancements.

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430057

Title:
  Fake instance stuck in MIGRATING state

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am using FakeDriver.

  It seems that with concurrent resize and live-migration operations, a
  fake instance can remain stuck in the MIGRATING state.

  To reproduce the bug, I spawned a fake instance and ran a script that
  resized it to random flavors every second. Concurrently, from another
  node, I ran a script that tried to live-migrate the instance to another
  host (every 1.2 seconds).
  Most of the time the messages were something like 'cannot migrate
  instance in state VERIFY_RESIZE', but when live-migration succeeded the
  instance was stuck in MIGRATING status and needed a `nova refresh
  --active` command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430057/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435743] Re: fwaasrouterinsertion extension information is available in Horizon

2015-03-24 Thread Cedric Brandily
** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Cedric Brandily (cbrandily) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435743

Title:
  fwaasrouterinsertion extension information is available in Horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The fwaasrouterinsertion extension allows setting/unsetting which
  routers implement a firewall on create/update.

  http://specs.openstack.org/openstack/neutron-specs/specs/kilo/fwaas-
  router-insertion.html#rest-api-impact

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435855] [NEW] Default rule does not work in ceilometer policy.json

2015-03-24 Thread Divya K Konoor
Public bug reported:

The rule "default" does not work for ceilometer. I tried a few of these
and they don't work. I am able to proceed with REST APIs that are not
mentioned even when the default is set to not_allowed.

"default": "not_allowed:True",
"default": "!",

The problem appears to be here /usr/lib/python2.7/site-
packages/ceilometer/api/rbac.py

for rule_name in _ENFORCER.rules.keys():
    if rule_method == rule_name:
        if not _ENFORCER.enforce(
                rule_name,
                {},
                policy_dict):
            pecan.core.abort(status_code=403,
                             detail='RBAC Authorization Failed')


The rbac.enforce method loops through all the rules and filters for the
one that matches the requested rule. However, in the case where the rule
has not been specified in the policy.json file, there is no logic in the
above to fall back on the default value. The default logic is already
taken care of by oslo_policy, and the above loop seems to be causing the
problem.

** Affects: keystone
 Importance: Undecided
 Assignee: Divya K Konoor (dikonoor)
 Status: Incomplete

** Changed in: keystone
 Assignee: (unassigned) => Divya K Konoor (dikonoor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1435855

Title:
  Default rule does not work in ceilometer policy.json

Status in OpenStack Identity (Keystone):
  Incomplete

Bug description:
  The rule "default" does not work for ceilometer. I tried a few of
  these and they don't work. I am able to proceed with REST APIs that
  are not mentioned even when the default is set to not_allowed.

  "default": "not_allowed:True",
  "default": "!",

  The problem appears to be here /usr/lib/python2.7/site-
  packages/ceilometer/api/rbac.py

  for rule_name in _ENFORCER.rules.keys():
      if rule_method == rule_name:
          if not _ENFORCER.enforce(
                  rule_name,
                  {},
                  policy_dict):
              pecan.core.abort(status_code=403,
                               detail='RBAC Authorization Failed')

  
  The rbac.enforce method loops through all the rules and filters for the
  one that matches the requested rule. However, in the case where the rule
  has not been specified in the policy.json file, there is no logic in the
  above to fall back on the default value. The default logic is already
  taken care of by oslo_policy, and the above loop seems to be causing the
  problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1435855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426543] Re: Spike in DBDeadlock errors in update_floatingip_statuses since 2/27

2015-03-24 Thread Joe Gordon
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426543

Title:
  Spike in DBDeadlock errors in update_floatingip_statuses since 2/27

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/40/122240/19/gate/gate-tempest-dsvm-neutron-
  full/4ef0a02/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-02-27_18_05_22_444

  2015-02-27 18:05:22.444 8433 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager Traceback (most 
recent call last):
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/opt/stack/new/nova/nova/compute/manager.py, line 1684, in 
_allocate_network_async
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/opt/stack/new/nova/nova/network/neutronv2/api.py, line 395, in 
allocate_for_instance
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager net_ids, 
neutron=neutron)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/opt/stack/new/nova/nova/network/neutronv2/api.py, line 226, in 
_get_available_networks
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager nets = 
neutron.list_networks(**search_opts).get('networks', [])
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 99, 
in with_params
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager ret = 
self.function(instance, *args, **kwargs)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
524, in list_networks
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager **_params)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
304, in list
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager for r in 
self._pagination(collection, path, **params):
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
317, in _pagination
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager res = 
self.get(path, params=params)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
290, in get
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager headers=headers, 
params=params)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
267, in retry_request
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager headers=headers, 
params=params)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
197, in do_request
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager 
content_type=self.content_type())
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/client.py, line 172, in 
do_request
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager **kwargs)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/client.py, line 108, in 
_cs_request
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager raise 
exceptions.ConnectionFailed(reason=e)
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager ConnectionFailed: 
Connection to neutron failed: HTTPConnectionPool(host='127.0.0.1', port=9696): 
Max retries exceeded with url: 
/v2.0/networks.json?tenant_id=1e707ab2d2be40a1902f3352c91a615a&shared=False 
(Caused by ReadTimeoutError("HTTPConnectionPool(host='127.0.0.1', port=9696): 
Read timed out. (read timeout=30)",))
  2015-02-27 18:05:22.444 8433 TRACE nova.compute.manager 


  http://goo.gl/UMI2vZ

  120 hits in the last 24 hours.  There aren't any new python-
  neutronclient releases in that time, so that's not it, and there are no
  changes to nova.network.neutronv2 in the last 24 hours, so we have to
  look at recent compute manager changes.

  There were a few releases of the requests library this week, but those
  were on 2/23 and 2/24, so if that was it we should have seen this by
  now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1435507] Re: Compute API v2.0 vs v2.1 labelling

2015-03-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/166979
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=bbf59fea1960039845eeb709cdb1558bfd53ff1e
Submitter: Jenkins
Branch:master

commit bbf59fea1960039845eeb709cdb1558bfd53ff1e
Author: Davanum Srinivas <dava...@gmail.com>
Date:   Mon Mar 23 15:22:42 2015 -0400

Switch labels on Compute API versions

Per email discussion, v2.1 is to be promoted for Nova as the
current version.

Change-Id: Ibe990ec93d8f9d18ef21c28979e180472df6a33d
Closes-Bug: #1435507


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1435507

Title:
  Compute API v2.0 vs v2.1 labelling

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack API documentation site:
  Fix Released

Bug description:
  Hi,

  Some feedback from the Nova team,

  http://developer.openstack.org/api-ref-compute-v2.1.html should be marked 
CURRENT
  http://developer.openstack.org/api-ref-compute-v2-ext.html should be marked 
SUPPORTED

  when we release kilo, based on feedback collected at:
  http://markmail.org/message/p32p5jbvvjedg657

  thanks,
  dims

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1435507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424061] Re: keystone server should default to localhost-only

2015-03-24 Thread gordon chung
see sileht's comment: https://review.openstack.org/#/c/158523/

** Changed in: ceilometer
 Assignee: Eric Brown (ericwb) => (unassigned)

** Changed in: ceilometer
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1424061

Title:
  keystone server should default to localhost-only

Status in OpenStack Telemetry (Ceilometer):
  Won't Fix
Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  
  By default keystone will listen on all interfaces. Keystone should use secure 
defaults. In this case, listen on localhost-only by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1424061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435852] [NEW] Use first() instead of one() in tunnel endpoint query

2015-03-24 Thread Romil Gupta
Public bug reported:

Consider neutron-server running in HA mode: Thread A is trying to delete the
endpoint for tunnel_ip=10.0.0.2.
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_tunnel.py#L243
Meanwhile, Thread B is trying to add the endpoint for tunnel_ip=10.0.0.2,
which already exists, so it falls into the except db_exc.DBDuplicateEntry
branch and looks the endpoint up by ip_address. But Thread A could delete it
first, since both threads run asynchronously. In that case, the query will
raise an exception if we use one() instead of first().
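
For illustration, a minimal sketch of the behavioral difference (hypothetical
session and model names, not the ml2 code itself):

    from sqlalchemy.orm.exc import NoResultFound

    def get_endpoint_by_ip(session, endpoint_model, tunnel_ip):
        query = session.query(endpoint_model).filter_by(ip_address=tunnel_ip)
        # With one(), a concurrent delete surfaces as an exception:
        try:
            query.one()
        except NoResultFound:
            pass              # the row vanished between the two threads
        # With first(), the same race simply yields None:
        return query.first()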

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435852

Title:
  Use first() instead of one() in tunnel endpoint query

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Consider neutron-server running in HA mode: Thread A is trying to delete
  the endpoint for tunnel_ip=10.0.0.2.
  https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_tunnel.py#L243
  Meanwhile, Thread B is trying to add the endpoint for tunnel_ip=10.0.0.2,
  which already exists, so it falls into the except db_exc.DBDuplicateEntry
  branch and looks the endpoint up by ip_address. But Thread A could delete
  it first, since both threads run asynchronously. In that case, the query
  will raise an exception if we use one() instead of first().

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287824] Re: l3 agent makes too many individual sudo/ip netns calls

2015-03-24 Thread Assaf Muller
rootwrap-daemon-mode was merged.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1287824

Title:
  l3 agent makes too many individual sudo/ip netns calls

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Basically, calls to sudo, root_wrap, and ip netns exec all add
  overhead that can make these calls very expensive.  Developing an
  effective way of consolidating these calls into considerably fewer
  calls will be a big win.  This assumes the mechanism for consolidating
  them does not itself add a lot of overhead.
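
  For illustration, a minimal sketch of the consolidation mechanism that
  was eventually merged (rootwrap daemon mode); the exact daemon command
  below is an assumption for a typical neutron deployment:

      import shlex
      from oslo_rootwrap import client

      # One long-lived privileged daemon replaces a sudo + rootwrap
      # fork/exec for every single command.
      daemon = client.Client(shlex.split(
          'sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf'))

      # Each call is now a cheap request to the running daemon:
      code, out, err = daemon.execute(
          ['ip', 'netns', 'exec', 'qrouter-example', 'ip', 'addr', 'show'])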

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1287824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417421] Re: br100s of multi host have same IP address

2015-03-24 Thread Sean Dague
This seems to be a support request

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417421

Title:
  br100s of multi host have same IP address

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Hi, everyone. My OpenStack runs with nova-network, configured for
  FlatDHCP with multi-host support. However, the br100 bridges on all
  compute nodes have the same IP address (e.g. all are 172.16.0.1), so
  the VMs on different compute nodes can't communicate. How can I
  configure OpenStack so that the br100 bridges have different gateway
  IP addresses? Thank you very much.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435869] [NEW] [Launch Instance Fix] Establish Baseline Unit Tests

2015-03-24 Thread Matt Borland
Public bug reported:

The Angular work done for Launch Instance should have a baseline set of
unit tests for each of the steps and the infrastructure supporting them
(model, wizard implementation, etc.).

These tests should ensure that each component has basic tests in their
associated .spec.js files.

The expectations for such components would be:

Controllers:
 - test for name
 - test injected elements (maybe inherent in other tests)
 - test each exposed logic function (whether on scope or on controller object)

Filters:
 - should be tested for expected object
 - should be tested for lack of expected object (undefined, null, etc. 
preferably using angular's presence functions)

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435869

Title:
  [Launch Instance Fix] Establish Baseline Unit Tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Angular work done for Launch Instance should have a baseline set
  of unit tests for each of the steps and the infrastructure supporting
  them (model, wizard implementation, etc.).

  These tests should ensure that each component has basic tests in their
  associated .spec.js files.

  The expectations for such components would be:

  Controllers:
   - test for name
   - test injected elements (maybe inherent in other tests)
   - test each exposed logic function (whether on scope or on controller object)

  Filters:
   - should be tested for expected object
   - should be tested for lack of expected object (undefined, null, etc. 
preferably using angular's presence functions)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435803] [NEW] Some compute manager tests take excessively long time to complete due to conductor timeouts

2015-03-24 Thread Hans Lindgren
Public bug reported:

Some compute manager tests that exercise the exception behavior of
methods combined with using a somewhat real instance parameter when
doing so take very long time to complete. This happens if the method
being tested has the @revert_task_state decorator because it will try to
update the instance using a conductor call when there is no conductor
service listening.

By setting the conductor use_local flag for those tests I am able to
reduce the total test time with 4 full minutes when run locally.
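
For illustration, a minimal sketch of the proposed change inside a test case
(assuming 2015-era nova, where test cases inherit self.flags() and the
[conductor] group has a use_local option):

    import nova.test

    class ComputeManagerTestCase(nova.test.TestCase):
        def setUp(self):
            super(ComputeManagerTestCase, self).setUp()
            # Use the local conductor API so no RPC call (and thus no
            # timeout) happens when @revert_task_state tries to save.
            self.flags(use_local=True, group='conductor')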

** Affects: nova
 Importance: Low
 Assignee: Hans Lindgren (hanlind)
 Status: Incomplete


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1435803

Title:
  Some compute manager tests take excessively long time to complete due
  to conductor timeouts

Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  Some compute manager tests that exercise the exception behavior of
  methods, combined with using a somewhat real instance parameter when
  doing so, take a very long time to complete. This happens if the method
  being tested has the @revert_task_state decorator, because it will try
  to update the instance using a conductor call when there is no
  conductor service listening.

  By setting the conductor use_local flag for those tests I am able to
  reduce the total test time by 4 full minutes when run locally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1435803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435748] [NEW] save method is getting called twice in the 'attach' API

2015-03-24 Thread Abhijeet Malawade
Public bug reported:


The 'save' method is getting called twice in the 'attach' method of class
'DriverVolumeBlockDevice'
(https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L224):
once from the 'update_db' decorator and once from the attach method itself.

There is no need for the 'update_db' decorator on the attach method, as
'save' is already called inside it.

Note: the save method will not update the db if there is no change in the
bdm object.
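
For illustration, a hypothetical sketch of the pattern being described (not
the actual nova code): a decorator that saves after the wrapped call,
duplicating the explicit save inside attach().

    import functools

    def update_db(method):
        @functools.wraps(method)
        def wrapper(obj, context, *args, **kwargs):
            ret = method(obj, context, *args, **kwargs)
            obj.save()  # second save; attach() already saved the bdm
            return ret
        return wrapper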

** Affects: nova
 Importance: Undecided
 Assignee: Abhijeet Malawade (abhijeet-malawade)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Abhijeet Malawade (abhijeet-malawade)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1435748

Title:
  save method is getting called twice in the 'attach' API

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  The 'save' method is getting called twice in the 'attach' method of
  class 'DriverVolumeBlockDevice'
  (https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L224):
  once from the 'update_db' decorator and once from the attach method
  itself.

  There is no need for the 'update_db' decorator on the attach method,
  as 'save' is already called inside it.

  Note: the save method will not update the db if there is no change in
  the bdm object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1435748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435811] [NEW] Quota charts should be hidden if quota is unlimited

2015-03-24 Thread Rob Cresswell
Public bug reported:

The quota charts in Horizon (Project -> Compute -> Overview) should be
hidden for unlimited quotas, as they convey no useful information.

** Affects: horizon
 Importance: Undecided
 Assignee: Bradley Jones (bradjones)
 Status: New


** Tags: low-hanging-fruit ux

** Tags added: low-hanging-fruit ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435811

Title:
  Quota charts should be hidden if quota is unlimited

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The quota charts in Horizon (Project -> Compute -> Overview) should be
  hidden for unlimited quotas, as they convey no useful information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435744] [NEW] UnicodeEncodeError when creating an instance

2015-03-24 Thread Harley Ren
Public bug reported:

Launching an instance failed. Horizon error message: "Danger: There was
an error submitting the form. Please try again."

Apache logs:

[Tue Mar 24 17:31:32.200854 2015] [:error] [pid 23314:tid 140209584568064] 
Internal Server Error: /horizon/project/instances/launch
[Tue Mar 24 17:31:32.200922 2015] [:error] [pid 23314:tid 140209584568064] 
Traceback (most recent call last):
[Tue Mar 24 17:31:32.200931 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/django/core/handlers/base.py, line 112, 
in get_response
[Tue Mar 24 17:31:32.200939 2015] [:error] [pid 23314:tid 140209584568064] 
response = wrapped_callback(request, *callback_args, **callback_kwargs)
[Tue Mar 24 17:31:32.200945 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/horizon/decorators.py, line 36, in dec
[Tue Mar 24 17:31:32.200952 2015] [:error] [pid 23314:tid 140209584568064] 
return view_func(request, *args, **kwargs)
[Tue Mar 24 17:31:32.200958 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/horizon/decorators.py, line 52, in dec
[Tue Mar 24 17:31:32.200964 2015] [:error] [pid 23314:tid 140209584568064] 
return view_func(request, *args, **kwargs)
[Tue Mar 24 17:31:32.200970 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/horizon/decorators.py, line 36, in dec
[Tue Mar 24 17:31:32.200977 2015] [:error] [pid 23314:tid 140209584568064] 
return view_func(request, *args, **kwargs)
[Tue Mar 24 17:31:32.200983 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/horizon/decorators.py, line 84, in dec
[Tue Mar 24 17:31:32.200989 2015] [:error] [pid 23314:tid 140209584568064] 
return view_func(request, *args, **kwargs)
[Tue Mar 24 17:31:32.200995 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/django/views/generic/base.py, line 69, 
in view
[Tue Mar 24 17:31:32.201002 2015] [:error] [pid 23314:tid 140209584568064] 
return self.dispatch(request, *args, **kwargs)
[Tue Mar 24 17:31:32.201008 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/django/views/generic/base.py, line 87, 
in dispatch
[Tue Mar 24 17:31:32.201014 2015] [:error] [pid 23314:tid 140209584568064] 
return handler(request, *args, **kwargs)
[Tue Mar 24 17:31:32.201020 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/horizon/workflows/views.py, line 165, 
in post
[Tue Mar 24 17:31:32.201027 2015] [:error] [pid 23314:tid 140209584568064] 
context = self.get_context_data(**kwargs)
[Tue Mar 24 17:31:32.201033 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/horizon/workflows/views.py, line 89, in 
get_context_data
[Tue Mar 24 17:31:32.201039 2015] [:error] [pid 23314:tid 140209584568064] 
workflow = self.get_workflow()
[Tue Mar 24 17:31:32.201045 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/horizon/workflows/views.py, line 79, in 
get_workflow
[Tue Mar 24 17:31:32.201051 2015] [:error] [pid 23314:tid 140209584568064] 
entry_point=entry_point)
[Tue Mar 24 17:31:32.201069 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/horizon/workflows/base.py, line 648, in 
__init__
[Tue Mar 24 17:31:32.201075 2015] [:error] [pid 23314:tid 140209584568064] 
valid = step.action.is_valid()
[Tue Mar 24 17:31:32.201081 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/django/forms/forms.py, line 129, in 
is_valid
[Tue Mar 24 17:31:32.201086 2015] [:error] [pid 23314:tid 140209584568064] 
return self.is_bound and not bool(self.errors)
[Tue Mar 24 17:31:32.201092 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/django/forms/forms.py, line 121, in 
errors
[Tue Mar 24 17:31:32.201097 2015] [:error] [pid 23314:tid 140209584568064] 
self.full_clean()
[Tue Mar 24 17:31:32.201103 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/django/forms/forms.py, line 274, in 
full_clean
[Tue Mar 24 17:31:32.201108 2015] [:error] [pid 23314:tid 140209584568064] 
self._clean_form()
[Tue Mar 24 17:31:32.201113 2015] [:error] [pid 23314:tid 140209584568064]   
File /usr/lib/python2.7/dist-packages/django/forms/forms.py, line 300, in 
_clean_form
[Tue Mar 24 17:31:32.201119 2015] [:error] [pid 23314:tid 140209584568064] 
self.cleaned_data = self.clean()
[Tue Mar 24 17:31:32.201124 2015] [:error] [pid 23314:tid 140209584568064]   
File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/workflows/create_instance.py,
 line 167, in clean
[Tue Mar 24 17:31:32.201131 2015] [:error] [pid 23314:tid 140209584568064] 
usages = quotas.tenant_quota_usages(self.request)

[Yahoo-eng-team] [Bug 1370116] Re: Allow sending nova service-disable/enable to Hypervisor

2015-03-24 Thread Rob Cresswell
Addressed by: https://review.openstack.org/#/c/135491/

** Changed in: horizon
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370116

Title:
  Allow sending nova service-disable/enable to Hypervisor

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Under Admin -> Hypervisors, allow disabling and enabling the compute
  service from Horizon.

  To prevent further scheduling of instance launches on a compute node we
  need to run:
   nova service-disable --reason REASON NODENAME nova-compute

  It would be helpful if we could send this through the Hypervisors menu.
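
  For illustration, a minimal sketch of what such a Horizon action could
  call (python-novaclient's services API; credentials and host names below
  are placeholders):

      from novaclient import client

      nova = client.Client('2', USER, PASSWORD, PROJECT, AUTH_URL)

      # Equivalent of: nova service-disable --reason REASON HOST nova-compute
      nova.services.disable_log_reason('compute-host-1', 'nova-compute',
                                       'planned maintenance')

      # And to re-enable scheduling onto the node:
      nova.services.enable('compute-host-1', 'nova-compute')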

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434103] Re: SQL schema downgrades are no longer supported

2015-03-24 Thread Sergey Reshetnyak
** Also affects: sahara (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: sahara (Ubuntu)

** Also affects: sahara
   Importance: Undecided
   Status: New

** Changed in: sahara
   Status: New => Triaged

** Changed in: sahara
   Importance: Undecided => High

** Changed in: sahara
Milestone: None => kilo-rc1

** Changed in: sahara
 Assignee: (unassigned) => Sergey Reshetnyak (sreshetniak)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434103

Title:
  SQL schema downgrades are no longer supported

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Data Processing (Sahara):
  Triaged

Bug description:
  Approved cross-project spec: https://review.openstack.org/152337

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1434103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397890] Re: Missing primary key constraint at endpoint_group.id column

2015-03-24 Thread Victor Sergeyev
*** This bug is a duplicate of bug 1399768 ***
https://bugs.launchpad.net/bugs/1399768

** This bug has been marked a duplicate of bug 1399768
   migration of endpoint_filter fails due to foreign key constraint

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1397890

Title:
  Missing primary key constraint at endpoint_group.id column

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Most tables should have a primary key, and each table can have only
  ONE primary key. The PRIMARY KEY constraint uniquely identifies each
  record in a database table. The endpoint_group table has no primary
  key, but the project_endpoint_group table has a foreign key constraint
  pointing to the endpoint_group.id column. Such a migration can't be
  applied with any SQL backend except SQLite.
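
  For illustration, a hypothetical alembic-style sketch (column set
  reduced; not the actual keystone migration) of creating the table with
  the primary key the report says is missing:

      import sqlalchemy as sa
      from alembic import op

      def upgrade():
          op.create_table(
              'endpoint_group',
              # primary_key=True gives the referenced id column the
              # PRIMARY KEY constraint that non-SQLite backends require
              # before project_endpoint_group can point at it.
              sa.Column('id', sa.String(64), primary_key=True),
              sa.Column('name', sa.String(255), nullable=False),
          )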

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1397890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435743] [NEW] fwaasrouterinsertion extension information is available in Horizon

2015-03-24 Thread Cedric Brandily
Public bug reported:

The fwaasrouterinsertion extension allows setting/unsetting which
routers implement a firewall on create/update.

http://specs.openstack.org/openstack/neutron-specs/specs/kilo/fwaas-
router-insertion.html#rest-api-impact

** Affects: horizon
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435743

Title:
  fwaasrouterinsertion extension information is available in Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The fwaasrouterinsertion extension allows setting/unsetting which
  routers implement a firewall on create/update.

  http://specs.openstack.org/openstack/neutron-specs/specs/kilo/fwaas-
  router-insertion.html#rest-api-impact

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436079] [NEW] There is no API samples functional testing for the os-volumes_boot API

2015-03-24 Thread Matt Riedemann
Public bug reported:

We don't have any functional testing for this API extension:

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/volumes.py?id=2015.1.0b3#n588

There are unit tests:

http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/unit/api/openstack/compute/contrib/test_volumes.py?id=2015.1.0b3#n125

But no functional tests, which is kind of bad, especially given that we
don't have any test coverage in Tempest since the test_volume_boot_pattern
scenario test is skipped due to bug 1373513.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api testing volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436079

Title:
  There is no API samples functional testing for the os-volumes_boot API

Status in OpenStack Compute (Nova):
  New

Bug description:
  We don't have any functional testing for this API extension:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/volumes.py?id=2015.1.0b3#n588

  There are unit tests:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/unit/api/openstack/compute/contrib/test_volumes.py?id=2015.1.0b3#n125

  But no functional tests, which is kind of bad, especially given that
  we don't have any test coverage in Tempest since the
  test_volume_boot_pattern scenario test is skipped due to bug 1373513.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1436079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436107] [NEW] Adding transclude to search-bar for batch actions

2015-03-24 Thread Thai Tran
Public bug reported:

Currently, there is no good way to align the batch/table actions along
with the search bar. The best way is to treat each element in the row as
a table-cell. To achieve this, we transclude the search bar so that it
is possible to embed additional actions.

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: angular ui

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1436107

Title:
  Adding transclude to search-bar for batch actions

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, there is no good way to align the batch/table actions along
  with the search bar. The best way is to treat each element in the row
  as a table-cell. To achieve this, we transclude the search bar so that
  it is possible to embed additional actions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1436107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434103] Re: SQL schema downgrades are no longer supported

2015-03-24 Thread Brant Knudson
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434103

Title:
  SQL schema downgrades are no longer supported

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Data Processing (Sahara):
  In Progress

Bug description:
  Approved cross-project spec: https://review.openstack.org/152337

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1434103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436032] Re: connection reset by peer during glance image-create with vCenter backend

2015-03-24 Thread Davanum Srinivas (DIMS)
** Project changed: nova => glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1436032

Title:
  connection reset by peer during glance image-create with vCenter
  backend

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Environment:
  HA with vCenter hypervisor
  nova-network VlanManager
  cinder VMwareVcVmdkDriver

  api: '1.0'
  astute_sha: 16b252d93be6aaa73030b8100cf8c5ca6a970a91
  auth_required: true
  build_id: 2014-12-26_14-25-46
  build_number: '58'
  feature_groups:
  - mirantis
  fuellib_sha: fde8ba5e11a1acaf819d402c645c731af450aff0
  fuelmain_sha: 81d38d6f2903b5a8b4bee79ca45a54b76c1361b8
  nailgun_sha: 5f91157daa6798ff522ca9f6d34e7e135f150a90
  ostf_sha: a9afb68710d809570460c29d6c3293219d3624d4
  production: docker
  release: '6.0'
  release_versions:
2014.2-6.0:
  VERSION:
api: '1.0'
astute_sha: 16b252d93be6aaa73030b8100cf8c5ca6a970a91
build_id: 2014-12-26_14-25-46
build_number: '58'
feature_groups:
- mirantis
fuellib_sha: fde8ba5e11a1acaf819d402c645c731af450aff0
fuelmain_sha: 81d38d6f2903b5a8b4bee79ca45a54b76c1361b8
nailgun_sha: 5f91157daa6798ff522ca9f6d34e7e135f150a90
ostf_sha: a9afb68710d809570460c29d6c3293219d3624d4
production: docker
release: '6.0'

  
  2015-03-24 19:33:14.780 57129 ERROR glance_store._drivers.vmware_datastore 
[a9eaa1a6-d60c-4f62-8abc-7d2a18e7d4b5 35eec72551634331b0c00b6d1b10fc8f 
271797c584e2445d83b82fc57697daaf - - -] Failed
   to upload content of image 329a5529-e3d7-478f-b63a-ff33e68b9259
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore 
Traceback (most recent call last):
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File 
/usr/lib/python2.6/site-packages/glance_store/_drivers/vmware_datastore.py, 
line 346, in add
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 content=image_file)
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File 
/usr/lib/python2.6/site-packages/glance_store/_drivers/vmware_datastore.py, 
line 489, in _get_http_conn
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 conn.request(method, url, content, headers)
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File /usr/lib64/python2.6/httplib.py, line 914, in request
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 self._send_request(method, url, body, headers)
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File /usr/lib64/python2.6/httplib.py, line 954, in _send_request
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 self.send(body)
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File /usr/lib64/python2.6/httplib.py, line 756, in send
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 self.sock.sendall(data)
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File /usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 137, in 
sendall
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 v = self.send(data[count:])
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File /usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 113, in send
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 super(GreenSSLSocket, self).send, data, flags)
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File /usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 80, in 
_call_trampolining
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 return func(*a, **kw)
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   
File /usr/lib64/python2.6/ssl.py, line 174, in send
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore
 v = self._sslobj.write(data)
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore 
error: [Errno 104] Connection reset by peer
  2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1436032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435910] [NEW] Refactor horizon.scss file

2015-03-24 Thread Abishek Subramanian
Public bug reported:

There are currently two instances of the same code being used in the
horizon.scss file in openstack_dashboard, for the instance launch and
also the firewall policies (and soon to be a third, with firewall
routers).

Given all three are essentially using the same code, it makes sense to
refactor this file so that all three features can make use of one common
section of code.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435910

Title:
  Refactor horizon.scss file

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are currently two instances of the same code being used in the
  horizon.scss file in openstack_dashboard, for the instance launch and
  also the firewall policies (and soon to be a third, with firewall
  routers).

  Given all three are essentially using the same code, it makes sense
  to refactor this file so that all three features can make use of one
  common section of code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435919] [NEW] Traceback on listing security groups

2015-03-24 Thread Eugene Nikanorov
Public bug reported:

The following traceback has been observed in the gate jobs (it doesn't
lead to a job's failure though):

 TRACE neutron.api.v2.resource Traceback (most recent call last):
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
 TRACE neutron.api.v2.resource result = method(request=request, **args)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 311, in index
 TRACE neutron.api.v2.resource return self._items(request, True, parent_id)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 245, in _items
 TRACE neutron.api.v2.resource obj_list = obj_getter(request.context, 
**kwargs)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/securitygroups_db.py, line 178, in 
get_security_groups
 TRACE neutron.api.v2.resource self._ensure_default_security_group(context, 
tenant_id)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/securitygroups_db.py, line 553, in 
_ensure_default_security_group
 TRACE neutron.api.v2.resource return default_group['security_group_id']
 TRACE neutron.api.v2.resource   File /usr/lib/python2.7/contextlib.py, line 
24, in __exit__
 TRACE neutron.api.v2.resource self.gen.next()
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/api.py, line 59, in autonested_transaction
 TRACE neutron.api.v2.resource yield tx
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 482, 
in __exit__
 TRACE neutron.api.v2.resource self.rollback()
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
 TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 479, 
in __exit__
 TRACE neutron.api.v2.resource self.commit()
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 382, 
in commit
 TRACE neutron.api.v2.resource self._assert_active(prepared_ok=True)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 218, 
in _assert_active
 TRACE neutron.api.v2.resource This Session's transaction has been rolled 
back 
 TRACE neutron.api.v2.resource InvalidRequestError: This Session's transaction 
has been rolled back by a nested rollback() call.  To begin a new transaction, 
issue Session.rollback() first.

Example:

http://logs.openstack.org/17/165117/6/check/check-tempest-dsvm-neutron-
pg/7017248/logs/screen-q-svc.txt.gz?level=TRACE
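
For illustration, a minimal standalone sketch of this failure mode (assuming
pre-1.4 SQLAlchemy subtransaction semantics, as used in this era; not the
neutron code itself):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine('sqlite://')
    session = sessionmaker(bind=engine)()

    session.begin(subtransactions=True)  # outer transaction
    session.begin(subtransactions=True)  # nested helper, as in
                                         # autonested_transaction
    session.rollback()   # the nested rollback deactivates the outer
                         # transaction
    session.commit()     # raises InvalidRequestError: "This Session's
                         # transaction has been rolled back by a nested
                         # rollback() call. ..."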

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: Confirmed

** Description changed:

- The following traceback has been observed in the gate jobs:
+ The following traceback has been observed in the gate jobs (it doesn't
+ lead to a job's failure though):
  
-  TRACE neutron.api.v2.resource Traceback (most recent call last):
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
-  TRACE neutron.api.v2.resource result = method(request=request, **args)
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 311, in index
-  TRACE neutron.api.v2.resource return self._items(request, True, 
parent_id)
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 245, in _items
-  TRACE neutron.api.v2.resource obj_list = obj_getter(request.context, 
**kwargs)
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/securitygroups_db.py, line 178, in 
get_security_groups
-  TRACE neutron.api.v2.resource 
self._ensure_default_security_group(context, tenant_id)
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/securitygroups_db.py, line 553, in 
_ensure_default_security_group
-  TRACE neutron.api.v2.resource return default_group['security_group_id']
-  TRACE neutron.api.v2.resource   File /usr/lib/python2.7/contextlib.py, 
line 24, in __exit__
-  TRACE neutron.api.v2.resource self.gen.next()
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/api.py, line 59, in autonested_transaction
-  TRACE neutron.api.v2.resource yield tx
-  TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 482, 
in __exit__
-  TRACE neutron.api.v2.resource self.rollback()
-  TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
-  TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
-  TRACE neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1435929] [NEW] Firewalls index page is missing header

2015-03-24 Thread Rob Cresswell
Public bug reported:

See Project > Network > Firewalls.

The header "Firewalls" is missing from this page, which is inconsistent
with the rest of Horizon.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: kilo-rc-potential ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435929

Title:
  Firewalls index page is missing header

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  See Project > Network > Firewalls.

  The header "Firewalls" is missing from this page, which is
  inconsistent with the rest of Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435908] [NEW] Refactor horizon.js

2015-03-24 Thread Abishek Subramanian
Public bug reported:

There are currently two instances of the same code in the horizon.js
file, used by the instance launch and the firewall policies (with a
third soon to come for firewall routers).

Given that all three essentially use the same code, it makes sense to
refactor this file so that all three features can make use of one
common section of code.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435908

Title:
  Refactor horizon.js

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are currently two instances of the same code in the horizon.js
  file, used by the instance launch and the firewall policies (with a
  third soon to come for firewall routers).

  Given that all three essentially use the same code, it makes sense
  to refactor this file so that all three features can make use of one
  common section of code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435908/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427365] Re: openvswitch-agent init script does not source /etc/sysconfig/neutron

2015-03-24 Thread Tom Helander
Hmmm, I had assumed as much but was hoping this was the right place to
report it. I'll keep hunting for the right place to report it then.
Thanks.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427365

Title:
  openvswitch-agent init script does not source /etc/sysconfig/neutron

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The init script '/etc/init.d/openstack-neutron-openvswitch-agent' does
  not source /etc/sysconfig/neutron, causing the ml2 plugin
  configuration to not be read as the default value for
  NEUTRON_PLUGIN_CONF in the init script is
  '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'.

  I resolved this problem by adding the source just after
  NEUTRON_PLUGIN_ARGS= (see attached).

  I'm running on openSUSE-13.1 using the SUSE open build service RPM
  repository
  (https://build.opensuse.org/project/show/Cloud:OpenStack:Juno).

  # rpm -qi openstack-neutron-openvswitch-agent
  Name: openstack-neutron-openvswitch-agent
  Version : 2014.2.3.dev28
  Release : 1.1
  Architecture: noarch
  Install Date: Mon Mar  2 11:55:57 2015
  Group   : Development/Languages/Python
  Size: 14893
  License : Apache-2.0
  Signature   : RSA/SHA1, Fri Feb 27 20:08:54 2015, Key ID 893a90dad85f9316
  Source RPM  : openstack-neutron-2014.2.3.dev28-1.1.src.rpm
  Build Date  : Fri Feb 27 20:07:52 2015
  Build Host  : build24
  Relocations : (not relocatable)
  Vendor  : obs://build.opensuse.org/Cloud:OpenStack
  URL : https://launchpad.net/neutron
  Summary : OpenStack Network - Open vSwitch
  Description :
  This package provides the OpenVSwitch Agent.
  Distribution: Cloud:OpenStack:Juno / openSUSE_13.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435925] [NEW] Port field in Add Rule modal is missing asterisk

2015-03-24 Thread Rob Cresswell
Public bug reported:

To reproduce:
1. Go to Project > Compute > Access & Security
2. Click Manage Rules for any Security Group, in the Security Groups tab
3. Click Add Rule
4. Try to create the rule with the Port field empty. It is unmarked, and the 
error message is at the top of the form, rather than next to the relevant 
field. This should be tidied up.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: kilo-rc-potential ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435925

Title:
  Port field in Add Rule modal is missing asterisk

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  To reproduce:
  1. Go to Project > Compute > Access & Security
  2. Click Manage Rules for any Security Group, in the Security Groups tab
  3. Click Add Rule
  4. Try to create the rule with the Port field empty. It is unmarked, and the 
error message is at the top of the form, rather than next to the relevant 
field. This should be tidied up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436141] [NEW] Federation get unscoped token from assertion throws : ERROR tuple index out of range

2015-03-24 Thread Haneef Ali
Public bug reported:

Relevant line in the code
  
https://github.com/openstack/keystone/blob/master/keystone/contrib/federation/utils.py#L158

Relevant logs

keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils process 
rules: [{u'remote': [{u'type': u'openstack_user', u'any_one_of': [u'user1', 
u'admin']}], u'local': [{u'user': {u'name': u'{0}'}}, {u'group': {u'id': 
u'a9b7c29b5e2d4094a66e240d2827c622'}}]}]
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
_update_local_mapping direct_maps: 
keystone.contrib.federation.utils.DirectMaps object at 0x7f2665054510
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
_update_local_mapping local: {u'user': {u'name': u'{0}'}}
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
_update_local_mapping direct_maps: 
keystone.contrib.federation.utils.DirectMaps object at 0x7f2665054510
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
_update_local_mapping local: {u'name': u'{0}'}
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
__getitem__ []
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,921 DEBUG utils 
__getitem__ 0
(keystone.common.wsgi): 2015-03-25 02:40:06,922 ERROR wsgi __call__ tuple index 
out of range
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 
239, in __call__
result = method(context, **params)
  File 
/usr/local/lib/python2.7/dist-packages/keystone/contrib/federation/controllers.py,
 line 267, in federated_authentication
return self.authenticate_for_token(context, auth=auth)
  File /usr/local/lib/python2.7/dist-packages/keystone/auth/controllers.py, 
line 377, in authenticate_for_token
self.authenticate(context, auth_info, auth_context)
  File /usr/local/lib/python2.7/dist-packages/keystone/auth/controllers.py, 
line 502, in authenticate
auth_context)
  File 
/usr/local/lib/python2.7/dist-packages/keystone/auth/plugins/mapped.py, line 
70, in authenticate
self.identity_api)
  File 
/usr/local/lib/python2.7/dist-packages/keystone/auth/plugins/mapped.py, line 
144, in handle_unscoped_token
federation_api, identity_api)
  File 
/usr/local/lib/python2.7/dist-packages/keystone/auth/plugins/mapped.py, line 
193, in apply_mapping_filter
mapped_properties = rule_processor.process(assertion)
  File 
/usr/local/lib/python2.7/dist-packages/keystone/contrib/federation/utils.py, 
line 453, in process
new_local = self._update_local_mapping(local, direct_maps)
  File 
/usr/local/lib/python2.7/dist-packages/keystone/contrib/federation/utils.py, 
line 595, in _update_local_mapping
new_value = self._update_local_mapping(v, direct_maps)
  File 
/usr/local/lib/python2.7/dist-packages/keystone/contrib/federation/utils.py, 
line 597, in _update_local_mapping
new_value = v.format(*direct_maps)
IndexError: tuple index out of range
(keystone.common.wsgi): 2015-03-25 02:40:06,922 ERROR tuple index out of range
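
The failing step is the v.format(*direct_maps) call at utils.py line 597
above. A minimal sketch of the failure (hypothetical values, Python 2.7):
the local rule's template references {0}, but the assertion matched no
'any_one_of' rule, so the direct-maps tuple is empty and str.format raises
exactly this IndexError.

local_template = u'{0}'   # from the mapping: {"user": {"name": "{0}"}}
direct_maps = []          # no remote match captured a value to substitute

local_template.format(*direct_maps)
# IndexError: tuple index out of range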

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- Federation get unscoped token from assertion throws 
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
_update_local_mapping direct_maps: 
keystone.contrib.federation.utils.DirectMaps object at 0x7f2665054510 
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
_update_local_mapping local: {u'user': {u'name': u'{0}'}} 
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
_update_local_mapping direct_maps: 
keystone.contrib.federation.utils.DirectMaps object at 0x7f2665054510 
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
_update_local_mapping local: {u'name': u'{0}'} 
(keystone.contrib.federation.utils): 2015-03-25 02:40:06,920 DEBUG utils 
__getitem__ [] (keystone.contrib.federation.utils): 2015-03-25 02:40:06,921 
DEBUG utils __getitem__ 0 (keystone.common.wsgi): 2015-03-25 02:40:06,922 ERROR 
wsgi __call__ tuple index out of range Traceback (most recent call last):   
File /usr/local/lib/python2
 .7/dist-packages/keystone/common/wsgi.py, line 239, in __call__ result = 
method(context, **params)   File 
/usr/local/lib/python2.7/dist-packages/keystone/contrib/federation/controllers.py,
 line 267, in federated_authentication return 
self.authenticate_for_token(context, auth=auth)   File 
/usr/local/lib/python2.7/dist-packages/keystone/auth/controllers.py, line 
377, in authenticate_for_token self.authenticate(context, auth_info, 
auth_context)   File 
/usr/local/lib/python2.7/dist-packages/keystone/auth/controllers.py, line 
502, in authenticate auth_context)   File 
/usr/local/lib/python2.7/dist-packages/keystone/auth/plugins/mapped.py, line 
70, in authenticate self.identity_api)   File 
/usr/local/lib/python2.7/dist-packages/keystone/auth/plugins/mapped.py, line 
144, in handle_unscoped_token federation_api, 

[Yahoo-eng-team] [Bug 1436156] [NEW] DVR agent remove the drop flows of br-int but don't add them

2015-03-24 Thread KaiLin
Public bug reported:

In DVR mode:
the OVS agent adds the drop flow in setup_physical_bridges, but the OVS DVR
agent removes all flows and only adds the NORMAL flow to the LOCAL_SWITCHING
table; it does not re-add the drop flow. This may cause critical problems,
such as a network storm.
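
A sketch of the asymmetry (simplified; the bridge/port variable names here
are assumptions for illustration, not the agent's exact code):

# What setup_physical_bridges installs on br-int for each physical patch port:
int_br.add_flow(priority=2, in_port=int_patch_ofport, actions="drop")

# What the OVS DVR agent initialization effectively does today:
int_br.remove_all_flows()
int_br.add_flow(table=LOCAL_SWITCHING, priority=1, actions="normal")
# ... the priority-2 drop rule is never re-installed, so traffic from the
# physical bridge can be flooded back, risking a loop / network storm.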

** Affects: neutron
 Importance: Undecided
 Assignee: KaiLin (linkai3)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => KaiLin (linkai3)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1436156

Title:
  DVR agent remove the drop flows of br-int but don't add them

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In DVR mode:
  the OVS agent adds the drop flow in setup_physical_bridges, but the OVS
  DVR agent removes all flows and only adds the NORMAL flow to the
  LOCAL_SWITCHING table; it does not re-add the drop flow. This may cause
  critical problems, such as a network storm.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1436156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434172] Re: security group create errors without description

2015-03-24 Thread Steve Martinelli
Switched it back to new, let me know if you disagree.

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1434172

Title:
  security group create errors without description

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  In Progress
Status in OpenStack Command Line Client:
  In Progress

Bug description:
  security group create returns an error without --description supplied.
  This appears to be the server rejecting the request so we should set a
  default value rather than sending None.

    $ openstack security group create qaz
    ERROR: openstack Security group description is not a string or unicode 
(HTTP 400) (Request-ID: req-dee03de3-893a-4d58-bc3d-de87d09c3fb8)

  Sent body:

    {"security_group": {"name": "qaz2", "description": null}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1434172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434172] Re: security group create errors without description

2015-03-24 Thread Steve Martinelli
@sdague, I made a change to novaclient to ensure the body does not
include the description value.

You can see in the log below that the body is just: {"security_group":
{"name": "tempo"}}

But, the result remains the same (400 with description is not a string or 
unicode):
2015-03-25 00:22:48.445 DEBUG nova.api.openstack.wsgi 
[req-0c501906-c662-4afd-a3fc-3b6e5e22caf9 admin admin] Action: 'create', 
calling method: bound method SecurityGroupController.create of 
nova.api.openstack.compute.contrib.security_groups.SecurityGroupController 
object at 0x7f0768586cd0, body: {security_group: {name: tempo}} from 
(pid=14913) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:780
2015-03-25 00:22:48.447 INFO nova.api.openstack.wsgi 
[req-0c501906-c662-4afd-a3fc-3b6e5e22caf9 admin admin] HTTP exception thrown: 
Security group description is not a string or unicode
2015-03-25 00:22:48.447 DEBUG nova.api.openstack.wsgi 
[req-0c501906-c662-4afd-a3fc-3b6e5e22caf9 admin admin] Returning 400 to user: 
Security group description is not a string or unicode from (pid=14913) __call__ 
/opt/stack/nova/nova/api/openstack/wsgi.py:1166
2015-03-25 00:22:48.450 INFO nova.osapi_compute.wsgi.server 
[req-0c501906-c662-4afd-a3fc-3b6e5e22caf9 admin admin] 10.0.2.15 POST 
/v2/36c7f6452c394b44ad4ae1f2bfe07800/os-security-groups HTTP/1.1 status: 400 
len: 317 time: 0.1127591

This is the  change that I made to novaclient

steve:python-novaclient$ git diff
diff --git a/novaclient/v2/security_groups.py b/novaclient/v2/security_groups.py
index 40d1e7f..0cd4960 100644
--- a/novaclient/v2/security_groups.py
+++ b/novaclient/v2/security_groups.py
@@ -45,7 +45,9 @@ class SecurityGroupManager(base.ManagerWithFind):
         :param description: description of the security group
         :rtype: the security group object
         """
-        body = {"security_group": {"name": name, 'description': description}}
+        body = {"security_group": {"name": name}}
+        if description:
+            body['security_group']['description'] = description
         return self._create('/os-security-groups', body, 'security_group')

Double checked that it still works for the case with a description, and
it does.

2015-03-25 00:25:28.870 DEBUG nova.api.openstack.wsgi 
[req-e2be9899-c50a-40fc-9eaa-258db57b5cf3 admin admin] Action: 'create', 
calling method: bound method SecurityGroupController.create of 
nova.api.openstack.compute.contrib.security_groups.SecurityGroupController 
object at 0x7f0768586cd0, body: {security_group: {name: tempo, 
description: tempo_desc}} from (pid=14913) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:780
2015-03-25 00:25:28.871 DEBUG oslo_db.api 
[req-e2be9899-c50a-40fc-9eaa-258db57b5cf3 admin admin] Loading backend 
'sqlalchemy' from 'nova.db.sqlalchemy.api' from (pid=14913) _load_backend 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py:214
2015-03-25 00:25:28.872 WARNING oslo_config.cfg 
[req-e2be9899-c50a-40fc-9eaa-258db57b5cf3 admin admin] Option sql_connection 
from group DEFAULT is deprecated. Use option connection from group 
database.
2015-03-25 00:25:28.892 DEBUG oslo_db.sqlalchemy.session 
[req-e2be9899-c50a-40fc-9eaa-258db57b5cf3 admin admin] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 from (pid=14913) _check_effective_sql_mode 
/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py:513
2015-03-25 00:25:29.089 DEBUG nova.quota 
[req-e2be9899-c50a-40fc-9eaa-258db57b5cf3 admin admin] Created reservations 
['abcd8046-85b8-4a9f-a7ac-258e0050d10c'] from (pid=14913) reserve 
/opt/stack/nova/nova/quota.py:1319
2015-03-25 00:25:29.090 INFO nova.compute.api 
[req-e2be9899-c50a-40fc-9eaa-258db57b5cf3 admin admin] Create Security Group 
tempo

To me it still looks like this call will always fail:

  self.security_group_api.validate_property(group_description,
'description', None)

Since it always attempts to strip the value and match it with something.
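
For reference, a rough sketch of why that check rejects a missing
description (a simplification of the server-side validator, not nova's
exact code): a None value has no .strip(), and the resulting
AttributeError is reported as the 400 above.

def validate_property(value, property, allowed):
    # Simplified: the validator strips the value before matching it, so a
    # missing description (None) fails with AttributeError, which the API
    # reports as "Security group %s is not a string or unicode" (HTTP 400).
    try:
        value.strip()
    except AttributeError:
        raise ValueError(
            'Security group %s is not a string or unicode' % property)

validate_property(None, 'description', None)   # -> ValueError / HTTP 400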

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1434172

Title:
  security group create errors without description

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  In Progress
Status in OpenStack Command Line Client:
  In Progress

Bug description:
  security group create returns an error without --description supplied.
  This appears to be the server rejecting the request so we should set a
  default value rather than sending None.

    $ openstack security group create qaz
    ERROR: openstack Security group description is not a string or unicode 
(HTTP 400) (Request-ID: req-dee03de3-893a-4d58-bc3d-de87d09c3fb8)

  Sent body:

    {"security_group": {"name": "qaz2", "description": null}}

To manage notifications about this 

[Yahoo-eng-team] [Bug 1436166] [NEW] Problems with images bubble up as a simple There are not enough hosts available

2015-03-24 Thread Julian Edwards
Public bug reported:

When starting a new instance, I received the generic "There are not
enough hosts available" error, but the real reason was buried in the
logs: the image I was trying to use was corrupt.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436166

Title:
  Problems with images bubble up as a simple There are not enough hosts
  available

Status in OpenStack Compute (Nova):
  New

Bug description:
  When starting a new instance, I received the generic "There are not
  enough hosts available" error, but the real reason was buried in the
  logs: the image I was trying to use was corrupt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1436166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436160] [NEW] No test done for nova/virt/libvirt/driver.py in Icehouse release of Nova

2015-03-24 Thread Veena
Public bug reported:

There are no test cases written to test the functionality of
nova/virt/libvirt/driver.py in the stable Icehouse release. There is no
test_driver.py file in nova/tests/virt/libvirt.

Are the functions defined in driver.py tested in some other files?

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: icehouse libvirt testing

** Tags added: icehouse

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436160

Title:
  No test done for nova/virt/libvirt/driver.py in Icehouse release of
  Nova

Status in OpenStack Compute (Nova):
  New

Bug description:
  There are no test cases written to test the functionality of
  nova/virt/libvirt/driver.py in the stable Icehouse release. There is
  no test_driver.py file in nova/tests/virt/libvirt.

  Are the functions defined in driver.py tested in some other files?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1436160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350128] Re: sticky project selection with in progress instance tasks

2015-03-24 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1350128

Title:
  sticky project selection with in progress instance tasks

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  - In project #1 I had a bunch of instances in various states of broken
  and so had terminated them (again).

  - From project #1's Instances page I switched to a different tenancy,
  let's say project #2, the Overview page showed as expected.

  - Then headed to project #2's Instances page, but unexpectedly ended
  up back on project #1's Instance page.

  - That behaviour continued until the previously terminated instances
  in project #1 disappeared and I was then able to view Instances pages
  in other projects again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1350128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400286] Re: Subnet should allow for disabling DNS nameservers (instead of pushing dhcp ip as default)

2015-03-24 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1400286

Title:
  Subnet should allow for disabling DNS nameservers (instead of pushing
  dhcp ip as default)

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  When connecting a VM to more than 1 network interface, defaults of the
  second subnet will override user-defined settings of the first
  (usually primary) interface.

  Reproduce:
  1. create a VM with 2 network interfaces where:
  eth0 - subnet with a GW, and a custom DNS nameserver
  eth1 - secondary network where subnet is created with default settings, 
dhcp enabled
  NOTE: most images will require manually  requesting DHCP on eth1
  2. check routing and DNS details on VM.
  the custom DNS settings and the default GW from the primary subnet have
  been overridden by the secondary subnet's defaults

  Workarounds:
  1. reverse network settings so that primary interface sends DHCP request last.
  - problematic because usually the most important network and NIC should 
be defined first. Also, some VMs might be connected only to primary, so this 
would create inconsistencies between VMs in the same network
  2. Manually disable defaults on secondary subnet:
  - Works for GW.
  - Doesn't work for DNS, since Neutron configures dnsmasq to push DHCP 
port's IP when no DNS nameserver is defined
  3. Manually set secondary subnet's DNS to match primary's.
  - Not all users have access to the data of all other subnets. Primary 
network might have been created by another user

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1400286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436164] [NEW] duplicated l3 scheduler test cases

2015-03-24 Thread YAMAMOTO Takashi
Public bug reported:

L3SchedulerTestCase and L3ChanceSchedulerTestCase have a bunch of
duplicated test cases.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1436164

Title:
  duplicated l3 scheduler test cases

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  L3SchedulerTestCase and L3ChanceSchedulerTestCase have a bunch of
  duplicated test cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1436164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370477] Re: response examples of lists role APIs are lacking

2015-03-24 Thread Steve Martinelli
I'm not seeing an issue on the keystone side, not sure why this was
confirmed. Looks fine to me:

http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-roles
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-user-s-roles-on-domain
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-group-s-roles-on-domain
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-user-s-roles-on-project
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-group-s-roles-on-project

Closing this as invalid

** Changed in: keystone
   Status: Confirmed => Invalid

** Changed in: keystone
 Milestone: kilo-rc1 => None

** Changed in: keystone
 Assignee: Ciaran O Tuathail (ciaran-otuathail) => (unassigned)

** Changed in: keystone
   Importance: Low => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1370477

Title:
  response examples of lists role APIs are lacking

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack API documentation site:
  Fix Released

Bug description:
  The response examples of some role-listing APIs are lacking.

  https://github.com/openstack/keystone-
  specs/blob/5fb080c41937614a867c4471e9b6c8a1c2ee59f0/api/v3/identity-
  api-v3.rst#create-role

  For example:
  the "Lists roles for a user on a domain" API's example is

  [
      {
          "id": "--role-id--",
          "name": "--role-name--"
      },
      {
          "id": "--role-id--",
          "name": "--role-name--"
      }
  ]

  but the actual API's response is
  {
      "links": {
          "next": null,
          "previous": null,
          "self": "http://192.168.56.10:5000/v3/domains/716d729b22d647e5a04a2405d66c5eff/users/21de7aaafeb9454ba4c5b40b19016199/roles"
      },
      "roles": [
          {
              "id": "3d4b58f4be7649a497bb18b3f2e25d76",
              "links": {
                  "self": "http://192.168.56.10:5000/v3/roles/3d4b58f4be7649a497bb18b3f2e25d76"
              },
              "name": "Member"
          }
      ]
  }

  The same omissions exist in:
  - Lists roles for a user on a domain.
  - Lists roles for a specified domain group.
  - Lists roles for a user in a project.
  - Lists roles for a project group.
  - Lists roles.
  - Lists policies.

  These APIs' response examples don't have the "roles", "links", "next",
  "previous" and "self" keys.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1370477/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178745] Re: Inconsistent connectivity between instances with floating IPs

2015-03-24 Thread James Page
** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178745

Title:
  Inconsistent connectivity between instances with floating IPs

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in Ubuntu:
  Fix Released

Bug description:
  Communication between instances on the same fixed network using
  assigned floating IP addresses does not behave in a consistent
  fashion. In all-in-one and (possibly) multi-host deployments, creating
  connections using floating IPs appear to work (at least within the
  confines of the security groups). However, with standalone compute
  nodes, instances that are on the same compute node cannot successfully
  create a connection. Routing and matching endpoints seem to be at the
  core of this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1178745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433994] Re: Can't boot instance with fake_virt driver in flavor that has disk size 0

2015-03-24 Thread Sean Dague
disk 0 doesn't mean "no disk"; it means "expand dynamically based on the
image". This sounds like a reasonable failure mode.

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433994

Title:
  Can't boot instance with fake_virt driver in flavor that has disk size
   0

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I set VIRT_DRIVER=fake for my devstack.

  When I try to boot an instance with the m1.tiny flavor, the nova
  scheduler says that I don't have disk space:
  ram:799488 disk:0 io_ops:0 instances:0 does not have 1024 MB usable disk, it 
only has 0.0 MB usable disk.

  It happens because the scheduler calculates free_gb as the minimum of
  'compute.disk_available_least' and 'compute.free_disk_gb' when the former
  is not None, but the fake_virt driver defines 'disk_available_least = 0'.
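
  A sketch of that computation (simplified; the free_disk_gb value below
  is hypothetical):

  free_disk_gb = 10            # hypothetical value reported for the host
  disk_available_least = 0     # the fake driver hardcodes this

  if disk_available_least is not None:
      free_gb = min(free_disk_gb, disk_available_least)   # => 0
  else:
      free_gb = free_disk_gb
  # 0 MB usable disk, so the 1024 MB the flavor needs fails the filter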

  Maybe this is not a bug about booting an instance, but rather about the
  list of flavors for the fake_virt driver (or about giving a friendlier
  message that this operation cannot proceed).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1433994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384555] Re: SQL error during alembic.migration when populating Neutron database on MariaDB 10.0

2015-03-24 Thread James Page
** Changed in: neutron (Ubuntu)
   Importance: Undecided => Medium

** Changed in: neutron (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384555

Title:
  SQL error during alembic.migration when populating Neutron database on
  MariaDB 10.0

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released

Bug description:
  On a fresh installation of Juno, it seems that the database is not
  being populated correctly. This is the output of the log (I also
  demonstrated the DB had no tables to begin with):

  MariaDB [(none)]> use neutron
  Database changed
  MariaDB [neutron]> show tables;
  Empty set (0.00 sec)

  MariaDB [neutron]> quit
  Bye
  root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini current
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  Current revision for mysql://neutron:X@10.10.10.1/neutron: None
  root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini upgrade head
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade None - havana, havana_initial
  INFO  [alembic.migration] Running upgrade havana - e197124d4b9, add unique 
constraint to members
  INFO  [alembic.migration] Running upgrade e197124d4b9 - 1fcfc149aca4, Add a 
unique constraint on (agent_type, host) columns to prevent a race condition 
when an agent entry is 'upserted'.
  INFO  [alembic.migration] Running upgrade 1fcfc149aca4 - 50e86cb2637a, 
nsx_mappings
  INFO  [alembic.migration] Running upgrade 50e86cb2637a - 1421183d533f, NSX 
DHCP/metadata support
  INFO  [alembic.migration] Running upgrade 1421183d533f - 3d3cb89d84ee, 
nsx_switch_mappings
  INFO  [alembic.migration] Running upgrade 3d3cb89d84ee - 4ca36cfc898c, 
nsx_router_mappings
  INFO  [alembic.migration] Running upgrade 4ca36cfc898c - 27cc183af192, 
ml2_vnic_type
  INFO  [alembic.migration] Running upgrade 27cc183af192 - 50d5ba354c23, ml2 
binding:vif_details
  INFO  [alembic.migration] Running upgrade 50d5ba354c23 - 157a5d299379, ml2 
binding:profile
  INFO  [alembic.migration] Running upgrade 157a5d299379 - 3d2585038b95, 
VMware NSX rebranding
  INFO  [alembic.migration] Running upgrade 3d2585038b95 - abc88c33f74f, lb 
stats
  INFO  [alembic.migration] Running upgrade abc88c33f74f - 1b2580001654, 
nsx_sec_group_mapping
  INFO  [alembic.migration] Running upgrade 1b2580001654 - e766b19a3bb, 
nuage_initial
  INFO  [alembic.migration] Running upgrade e766b19a3bb - 2eeaf963a447, 
floatingip_status
  INFO  [alembic.migration] Running upgrade 2eeaf963a447 - 492a106273f8, 
Brocade ML2 Mech. Driver
  INFO  [alembic.migration] Running upgrade 492a106273f8 - 24c7ea5160d7, Cisco 
CSR VPNaaS
  INFO  [alembic.migration] Running upgrade 24c7ea5160d7 - 81c553f3776c, 
bsn_consistencyhashes
  INFO  [alembic.migration] Running upgrade 81c553f3776c - 117643811bca, nec: 
delete old ofc mapping tables
  INFO  [alembic.migration] Running upgrade 117643811bca - 19180cf98af6, 
nsx_gw_devices
  INFO  [alembic.migration] Running upgrade 19180cf98af6 - 33dd0a9fa487, 
embrane_lbaas_driver
  INFO  [alembic.migration] Running upgrade 33dd0a9fa487 - 2447ad0e9585, Add 
IPv6 Subnet properties
  INFO  [alembic.migration] Running upgrade 2447ad0e9585 - 538732fa21e1, NEC 
Rename quantum_id to neutron_id
  INFO  [alembic.migration] Running upgrade 538732fa21e1 - 5ac1c354a051, n1kv 
segment allocs for cisco n1kv plugin
  INFO  [alembic.migration] Running upgrade 5ac1c354a051 - icehouse, icehouse
  INFO  [alembic.migration] Running upgrade icehouse - 54f7549a0e5f, 
set_not_null_peer_address
  INFO  [alembic.migration] Running upgrade 54f7549a0e5f - 1e5dd1d09b22, 
set_not_null_fields_lb_stats
  INFO  [alembic.migration] Running upgrade 1e5dd1d09b22 - b65aa907aec, 
set_length_of_protocol_field
  INFO  [alembic.migration] Running upgrade b65aa907aec - 33c3db036fe4, 
set_length_of_description_field_metering
  INFO  [alembic.migration] Running upgrade 33c3db036fe4 - 4eca4a84f08a, 
Remove ML2 Cisco Credentials DB
  INFO  [alembic.migration] Running upgrade 4eca4a84f08a - d06e871c0d5, 
set_admin_state_up_not_null_ml2
  INFO  [alembic.migration] Running upgrade d06e871c0d5 - 6be312499f9, 
set_not_null_vlan_id_cisco
  INFO  [alembic.migration] Running upgrade 6be312499f9 - 1b837a7125a9, Cisco 
APIC Mechanism Driver
  INFO  [alembic.migration] Running upgrade 1b837a7125a9 - 10cd28e692e9, 
nuage_extraroute
  INFO  [alembic.migration] Running upgrade 10cd28e692e9 - 2db5203cb7a9, 
nuage_floatingip
  INFO  [alembic.migration] Running upgrade 

[Yahoo-eng-team] [Bug 1433554] Re: DVR: metadata network not created for NSX-mh

2015-03-24 Thread Salvatore Orlando
Addressed for stable/juno by: https://review.openstack.org/167295

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron/juno
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433554

Title:
  DVR: metadata network not created for NSX-mh

Status in OpenStack Neutron (virtual network service):
  New
Status in neutron juno series:
  In Progress
Status in VMware NSX:
  Fix Committed

Bug description:
  When creating a distributed router, instances attached to it do not
  have metadata access.

  This is happening because the metadata network is not being created
  and connected to the router - since the process for handling metadata
  network has not been updated with the new interface type for DVR
  router ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433553] Re: DVR: remove interface fails on NSX-mh

2015-03-24 Thread Salvatore Orlando
Addressed for stable/juno by: https://review.openstack.org/167295

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron/juno
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433553

Title:
  DVR: remove interface fails on NSX-mh

Status in OpenStack Neutron (virtual network service):
  New
Status in neutron juno series:
  In Progress
Status in VMware NSX:
  Fix Committed

Bug description:
  The DVR mixin, which the MH plugin is now using, assumes that routers
  are deployed on l3 agents, which is not the case for VMware plugins.

  While it is generally wrong that a backend agnostic management layer
  makes assumptions about the backend, the VMware plugins should work
  around this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433550] Re: DVR: VMware NSX plugins do not need centralized snat interfaces

2015-03-24 Thread Salvatore Orlando
Addressed for stable/juno by: https://review.openstack.org/167295

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => In Progress

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433550

Title:
  DVR: VMware NSX plugins do not need centralized snat interfaces

Status in OpenStack Neutron (virtual network service):
  New
Status in neutron juno series:
  In Progress
Status in VMware NSX:
  Fix Committed

Bug description:
  When creating a distributed router, a centralized SNAT port is
  created.

  However, since the NSX backend does not need it to implement
  distributed routing, this is just a waste of resources (a port and an
  IP address). Also, it might confuse users with admin privileges, as
  they won't know what these ports are doing.

  So even if they do no harm, they should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435981] [NEW] PciPassthroughFilter throws exception if host has no pci devices

2015-03-24 Thread Przemyslaw Czesnowicz
Public bug reported:

When booting a VM with PCI devices, the PciPassthroughFilter will raise
an exception if one of the hosts doesn't have any assignable PCI
devices (because pci_stats is set to None on host_state in that case).


Traceback (most recent call last):
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_repl
y
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py, line 142, in 
inner
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/shared/stack/nova/nova/scheduler/manager.py, line 86, in select_destinations
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher 
filter_properties)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/shared/stack/nova/nova/scheduler/filter_scheduler.py, line 67, in 
select_destinations
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher 
filter_properties)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/shared/stack/nova/nova/scheduler/filter_scheduler.py, line 138, in _schedule
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher 
filter_properties, index=num)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/shared/stack/nova/nova/scheduler/host_manager.py, line 451, in 
get_filtered_hosts
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher hosts, 
filter_properties, index)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/shared/stack/nova/nova/filters.py, line 78, in get_filtered_objects
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher list_objs = 
list(objs)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/shared/stack/nova/nova/filters.py, line 44, in filter_all
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher if 
self._filter_one(obj, filter_properties):
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/shared/stack/nova/nova/scheduler/filters/__init__.py, line 27, in _filter_one
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/shared/stack/nova/nova/scheduler/filters/pci_passthrough_filter.py, line 48, 
in host_passes
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher if not 
host_state.pci_stats.support_requests(pci_requests.requests):
2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher AttributeError: 
'NoneType' object has no attribute 'support_requests'
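
The fix is presumably a None-guard in the filter. A minimal sketch
(simplified from the host_passes shown in the traceback; not the final
patch):

def host_passes(host_state, filter_properties):
    pci_requests = filter_properties.get('pci_requests')
    if not pci_requests or not pci_requests.requests:
        return True                    # nothing requested; any host passes
    if host_state.pci_stats is None:   # host has no assignable PCI devices
        return False                   # fail the host instead of raising
    return host_state.pci_stats.support_requests(pci_requests.requests)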

** Affects: nova
 Importance: Undecided
 Assignee: James Chapman (james-p-chapman)
 Status: New


** Tags: pci-passthrough

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1435981

Title:
  PciPassthroughFilter throws exception if host has no pci devices

Status in OpenStack Compute (Nova):
  New

Bug description:
  When booting a VM with PCI devices, the PciPassthroughFilter will raise
  an exception if one of the hosts doesn't have any assignable PCI
  devices (because pci_stats is set to None on host_state in that case)

  
  Traceback (most recent call last):
  2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_repl
  y
  2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
  2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
  2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-03-24 16:51:46.744 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py, line 

[Yahoo-eng-team] [Bug 1239484] Re: failed nova db migration upgrading from grizzly to havana

2015-03-24 Thread James Page
Grizzly and Havana are now both EOL in the cloud archive - marking won't
fix.

** Changed in: cloud-archive
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239484

Title:
  failed nova db migration upgrading from grizzly to havana

Status in Ubuntu Cloud Archive:
  Won't Fix
Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  I recently upgraded a Nova cluster from grizzly to havana. We're using
  the Ubuntu Cloud Archive and so in terms of package versions the
  upgrade was from 1:2013.1.3-0ubuntu1~cloud0 to
  1:2013.2~rc2-0ubuntu1~cloud0.  We're using mysql-server-5.5
  5.5.32-0ubuntu0.12.04.1 from Ubuntu 12.04 LTS.

  After the upgrade, nova-manage db sync failed as follows:

  # nova-manage db sync
  2013-10-13 21:08:54.132 26592 INFO migrate.versioning.api [-] 161 - 162...
  2013-10-13 21:08:54.138 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.140 26592 INFO migrate.versioning.api [-] 162 - 163...
  2013-10-13 21:08:54.145 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.146 26592 INFO migrate.versioning.api [-] 163 - 164...
  2013-10-13 21:08:54.154 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.154 26592 INFO migrate.versioning.api [-] 164 - 165...
  2013-10-13 21:08:54.162 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.162 26592 INFO migrate.versioning.api [-] 165 - 166...
  2013-10-13 21:08:54.167 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.170 26592 INFO migrate.versioning.api [-] 166 - 167...
  2013-10-13 21:08:54.175 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.176 26592 INFO migrate.versioning.api [-] 167 - 168...
  2013-10-13 21:08:54.184 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.184 26592 INFO migrate.versioning.api [-] 168 - 169...
  2013-10-13 21:08:54.189 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.189 26592 INFO migrate.versioning.api [-] 169 - 170...
  2013-10-13 21:08:54.199 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.199 26592 INFO migrate.versioning.api [-] 170 - 171...
  2013-10-13 21:08:54.204 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.205 26592 INFO migrate.versioning.api [-] 171 - 172...
  2013-10-13 21:08:54.841 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.842 26592 INFO migrate.versioning.api [-] 172 - 173...
  2013-10-13 21:08:54.883 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 409 from table: key_pairs
  2013-10-13 21:08:54.888 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 257 from table: key_pairs
  2013-10-13 21:08:54.889 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 383 from table: key_pairs
  2013-10-13 21:08:54.897 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 22 from table: key_pairs
  2013-10-13 21:08:54.905 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 65 from table: key_pairs
  2013-10-13 21:08:54.911 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 106 from table: key_pairs
  2013-10-13 21:08:54.911 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 389 from table: key_pairs
  2013-10-13 21:08:54.923 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 205 from table: key_pairs
  2013-10-13 21:08:54.928 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 259 from table: key_pairs
  2013-10-13 21:08:54.934 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 127 from table: key_pairs
  2013-10-13 21:08:54.946 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 337 from table: key_pairs
  2013-10-13 21:08:54.951 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 251 from table: key_pairs
  2013-10-13 21:08:54.991 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:54.991 26592 INFO migrate.versioning.api [-] 173 - 174...
  2013-10-13 21:08:55.052 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:55.053 26592 INFO migrate.versioning.api [-] 174 - 175...
  2013-10-13 21:08:55.146 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:55.147 26592 INFO migrate.versioning.api [-] 175 - 176...
  2013-10-13 21:08:55.171 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:55.172 26592 INFO migrate.versioning.api [-] 176 - 177...
  2013-10-13 21:08:55.236 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:55.237 26592 INFO migrate.versioning.api [-] 177 - 178...
  2013-10-13 21:08:55.635 26592 INFO migrate.versioning.api [-] done
  2013-10-13 21:08:55.636 26592 INFO migrate.versioning.api [-] 178 - 179...
  2013-10-13 21:08:55.692 26592 INFO 

[Yahoo-eng-team] [Bug 1384660] Re: Idle rpc traffic with a large number of instances causes failures

2015-03-24 Thread James Page
I believe the optimizations detailed in this change have now landed in >=
Juno; marking fix released.

** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384660

Title:
  Idle rpc traffic with a large number of instances causes failures

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  OpenStack Juno (Neutron ML2+OVS/l2pop/neutron security groups), Ubuntu
  14.04

  500 compute node cloud, running 4.5k active instances (can't get it
  any further right now).

  As the number of instances in the cloud increases, the idle load on
  the neutron-server servers (4 of them, all with 4 cores/8 threads and a
  suitable *_worker configuration) increases from nothing to 30; the db
  call get_port_and_sgs is being serviced around 10 times per second on
  each server at this point. Other things are also happening - I've
  attached the last 1000 lines of the server log with debug enabled.

  The result is that its no longer possible to create new instances, as
  the rpc calls and api thread just don't get onto CPU, resulting in VIF
  plugging timeouts on compute nodes, and ERROR'ed instances.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: neutron-common 1:2014.2-0ubuntu1~cloud0 [origin: Canonical]
  ProcVersionSignature: User Name 3.13.0-35.62-generic 3.13.11.6
  Uname: Linux 3.13.0-35-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.5
  Architecture: amd64
  CrashDB:
   {
  impl: launchpad,
  project: cloud-archive,
  bug_pattern_url: 
http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml;,
   }
  Date: Thu Oct 23 10:22:14 2014
  PackageArchitecture: all
  SourcePackage: neutron
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.neutron.api.paste.ini: [deleted]
  modified.conffile..etc.neutron.fwaas.driver.ini: [deleted]
  modified.conffile..etc.neutron.l3.agent.ini: [deleted]
  modified.conffile..etc.neutron.neutron.conf: [deleted]
  modified.conffile..etc.neutron.policy.json: [deleted]
  modified.conffile..etc.neutron.rootwrap.conf: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.debug.filters: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.ipset.firewall.filters: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.iptables.firewall.filters: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.l3.filters: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.vpnaas.filters: [deleted]
  modified.conffile..etc.neutron.vpn.agent.ini: [deleted]
  modified.conffile..etc.sudoers.d.neutron.sudoers: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1384660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1228977] Re: n-cpu seems to crash when running with libvirt 1.1.1 from ubuntu cloud archive

2015-03-24 Thread James Page
** Changed in: cloud-archive
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1228977

Title:
  n-cpu seems to crash when running with libvirt 1.1.1 from ubuntu cloud
  archive

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Saucy:
  Fix Released

Bug description:
  impact
  ------

  any concurrent use of libvirt may lock up libvirt

  test case
  ---------

  use libvirt concurrently, specifically the nwfilter + createDomain
  calls, e.g. run devstack-gate against this
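
  A minimal reproduction sketch, under stated assumptions: python-libvirt
  installed, a local qemu:///system socket, and a made-up DOMAIN_XML
  template (none of this is from the original report). It only mimics the
  concurrent createDomain side; devstack-gate exercises the nwfilter
  paths as well:

    import threading

    import libvirt  # python-libvirt bindings

    # hypothetical minimal transient-domain template
    DOMAIN_XML = """<domain type='qemu'>
      <name>probe-%d</name>
      <memory>65536</memory>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>"""

    def spawn(n):
        # each thread opens its own connection and creates a transient
        # domain, mimicking concurrent nova-compute callers
        conn = libvirt.open('qemu:///system')
        try:
            conn.createXML(DOMAIN_XML % n, 0)
        finally:
            conn.close()

    threads = [threading.Thread(target=spawn, args=(i,)) for i in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()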

  regression potential
  --------------------

  upstream stable branch update - should be low

  We experienced a series of jenkins rejects starting overnight on
  Saturday whose root cause has not been fully tracked down yet. However,
  they all have a couple of things in common:

  1) they are the first attempt to use libvirt 1.0.6 from the havana cloud
     archive for ubuntu precise
  2) the fails are all related to guests not spawning correctly
  3) the n-cpu log just stops about half way through the tempest log,
     making me suspect that we did something to either lock up or
     hard-crash n-cpu

  After that change went in no devstack/tempest gating project managed
  to merge a change.

  This needs more investigation, but creating this bug both to reverify
  against and to track down the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1228977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435971] [NEW] NoFilterFound for neutron-keepalived-state-change

2015-03-24 Thread Baodong (Robert) Li
Public bug reported:

When running neutron HA with devstack, neutron-keepalived-state-change
fails to spawn due to a NoFilterMatched rootwrap error.
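
A minimal illustration of the kind of rootwrap filter that appears to be
missing; the file name and filter name below are assumptions for
illustration, not the actual fix:

  # /etc/neutron/rootwrap.d/l3.filters (hypothetical entry)
  [Filters]
  keepalived_state_change: CommandFilter, neutron-keepalived-state-change, root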

** Affects: neutron
 Importance: Undecided
 Assignee: Baodong (Robert) Li (baoli)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Baodong (Robert) Li (baoli)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435971

Title:
  NoFilterFound for neutron-keepalived-state-change

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When running neutron HA with devstack, neutron-keepalived-state-change
  fails to spawn due to a NoFilterMatched rootwrap error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287824] Re: l3 agent makes too many individual sudo/ip netns calls

2015-03-24 Thread Carl Baldwin
** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1287824

Title:
  l3 agent makes too many individual sudo/ip netns calls

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Basically, calls to sudo, rootwrap, and ip netns exec all add overhead
  that can make these calls very expensive.  Developing an effective way
  of consolidating them into considerably fewer calls will be a big win.
  This assumes the mechanism for consolidating them does not itself add a
  lot of overhead.
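
  As a sketch of the sort of consolidation meant here (the names are made
  up; qrouter-X stands for a router namespace), "ip -batch" turns several
  sudo/rootwrap round-trips into one:

    # one rootwrap + netns round-trip per command (expensive):
    sudo ip netns exec qrouter-X ip link set qr-123 up
    sudo ip netns exec qrouter-X ip addr add 10.0.0.1/24 dev qr-123

    # consolidated: a single invocation reading subcommands from a file
    printf 'link set qr-123 up\naddr add 10.0.0.1/24 dev qr-123\n' > /tmp/cmds
    sudo ip netns exec qrouter-X ip -batch /tmp/cmds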

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1287824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252603] Re: PMTUD needs to be disabled for tunneling to work in many Grizzly environments.

2015-03-24 Thread James Page
** Changed in: cloud-archive
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252603

Title:
  PMTUD needs to be disabled for tunneling to work in many Grizzly
  environments.

Status in Ubuntu Cloud Archive:
  Won't Fix
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in quantum package in Ubuntu:
  Invalid

Bug description:
  In Grizzly, the version of OVS is lower than 1.9.0. As a result, the
  tunnel path MTU discovery default value is set to 'enabled'. But
  internet-wide path MTU discovery rarely works, so we need to add a
  configuration option to disable tunnel path MTU discovery.

  Discussion about this issue:
Connectivity issue from within the Instances.
http://lists.openstack.org/pipermail/openstack/2013-August/000293.html

  Blog about this issue:
Path MTU discovery and GRE.
http://techbackground.blogspot.com/2013/06/path-mtu-discovery-and-gre.html

  Tunnel path MTU discovery's default value was set to 'disabled' in OVS
  1.9.0. Both inheritance of the Don't Fragment bit in IP tunnels
  (df_inherit) and path MTU discovery are no longer supported in OVS
  1.10.0. So Linux distributions that ship with OVS >= 1.9.0 for Havana
  have no such issue.
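
  For illustration only, a hypothetical manual workaround on a
  Grizzly-era node, assuming the GRE port is named gre-1 on br-tun and
  that the OVS build (pre-1.10) still exposes the pmtud tunnel option:

    # inspect the current tunnel options
    sudo ovs-vsctl get Interface gre-1 options
    # disable tunnel path MTU discovery on that port
    sudo ovs-vsctl set Interface gre-1 options:pmtud=false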

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1252603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434172] Re: security group create errors without description

2015-03-24 Thread Sean Dague
description is optional; optional does not mean it can be null, it means
it shouldn't be in the payload at all.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1434172

Title:
  security group create errors without description

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Command Line Client:
  In Progress

Bug description:
  security group create returns an error without --description supplied.
  This appears to be the server rejecting the request so we should set a
  default value rather than sending None.

    $ openstack security group create qaz
    ERROR: openstack Security group description is not a string or unicode (HTTP 400) (Request-ID: req-dee03de3-893a-4d58-bc3d-de87d09c3fb8)

  Sent body:

    {"security_group": {"name": "qaz2", "description": null}}
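
  Per the comment above, a sketch of what the client should presumably
  send instead, with the optional key omitted entirely:

    {"security_group": {"name": "qaz2"}}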

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1434172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433309] Re: Libvirt: Detaching volume from instance on host with many attached volumes is very slow

2015-03-24 Thread Sean Dague
Is this reproducible with an open source backend?

** Changed in: nova
   Status: New => Incomplete

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433309

Title:
  Libvirt: Detaching volume from instance on host with many attached
  volumes is very slow

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  When many volumes are attached to instances on the same compute host
  (with multipath enabled), volume detach is very slow and gets slower as
  more volumes are attached.

  For example:
  1. compute1 is a compute node with instance1 and instance2.
  2. instance1 has 10 volumes attached while instance2 has a single
     volume attached.
  3. Issue a detach for the volume attached to instance2
  4. Nova spends 20 minutes executing the 'multipath -ll' command for
     every device on the hypervisor
  5. Finally the detach completes successfully

  The following log is output in n-cpu many, many times during the detach
  call, repeated for each volume device:
  http://paste.openstack.org/show/192981/

  
  Environment details:
  nova.conf virt driver
  [libvirt]
  iscsi_use_multipath = True
  vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
  inject_partition = -2
  live_migration_uri = qemu+ssh://ameade@%s/system
  use_usb_tablet = False
  cpu_mode = none
  virt_type = kvm

  cinder.conf backend
  [eseries]
  volume_backend_name = eseries
  volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
  netapp_storage_family = eseries
  netapp_storage_protocol = iscsi
  netapp_server_hostname = localhost
  netapp_server_port = 8081
  netapp_webservice_path = /devmgr/v2
  netapp_controller_ips = 10.78.152.114,10.78.152.115
  netapp_login = rw
  netapp_password = xx
  netapp_storage_pools = DDP
  use_multipath_for_image_xfer = True
  netapp_sa_password = password
  netapp_enable_multi_attach=True

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1433309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435924] [NEW] Submit button in Volume Transfer Details does nothing

2015-03-24 Thread Rob Cresswell
Public bug reported:

Steps to reproduce:
1. Create a volume in Project -> Compute -> Volumes
2. Select "Create Transfer" in the table dropdown for the Volume you created
3. Click "Create Volume Transfer" in the modal window
4. Click "Submit" on the Volume Transfer Details window

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: kilo-rc-potential ux

** Tags added: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435924

Title:
  Submit button in Volume Transfer Details does nothing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. Create a volume in Project -> Compute -> Volumes
  2. Select "Create Transfer" in the table dropdown for the Volume you created
  3. Click "Create Volume Transfer" in the modal window
  4. Click "Submit" on the Volume Transfer Details window

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210274] Re: InvalidQuotaValue should be 400 or 403 rather than 409

2015-03-24 Thread Rossella Sblendido
Marking this as invalid; looking at the comments in the proposed code
review, it seems that 409 is OK after all.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210274

Title:
  InvalidQuotaValue should be 400 or 403 rather than 409

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The neutron.common.exceptions.InvalidQuotaValue exception extends
  Conflict, which maps to a 409 status code, but that doesn't really make
  sense for this exception.  I'd say it should be a BadRequest (400), but
  the API docs list the possible response codes for quota extension
  operations as 401 or 403:

  http://docs.openstack.org/api/openstack-network/2.0/content/Update_Quotas.html

  In this case I'd say it's more of a 403 than a 409.  Regardless, 409
  isn't even in the API doc for the quota extension.
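
  For reference, a sketch of the class in question, paraphrased from the
  neutron tree of that era (exact message text may differ); extending
  Conflict is what maps it to 409:

    class InvalidQuotaValue(Conflict):
        message = _("Change would make usage less than 0 for the "
                    "following resources: %(unders)s")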

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1210274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436034] [NEW] connection reset by peer during glance image-create with vCenter backend

2015-03-24 Thread sergiy
Public bug reported:

Environment:
HA with vCenter hypervisor
nova-network vlanmanager
cinder VMwareVcVmdkDriver

api: '1.0'
astute_sha: 16b252d93be6aaa73030b8100cf8c5ca6a970a91
auth_required: true
build_id: 2014-12-26_14-25-46
build_number: '58'
feature_groups:
- mirantis
fuellib_sha: fde8ba5e11a1acaf819d402c645c731af450aff0
fuelmain_sha: 81d38d6f2903b5a8b4bee79ca45a54b76c1361b8
nailgun_sha: 5f91157daa6798ff522ca9f6d34e7e135f150a90
ostf_sha: a9afb68710d809570460c29d6c3293219d3624d4
production: docker
release: '6.0'
release_versions:
  2014.2-6.0:
    VERSION:
      api: '1.0'
      astute_sha: 16b252d93be6aaa73030b8100cf8c5ca6a970a91
      build_id: 2014-12-26_14-25-46
      build_number: '58'
      feature_groups:
      - mirantis
      fuellib_sha: fde8ba5e11a1acaf819d402c645c731af450aff0
      fuelmain_sha: 81d38d6f2903b5a8b4bee79ca45a54b76c1361b8
      nailgun_sha: 5f91157daa6798ff522ca9f6d34e7e135f150a90
      ostf_sha: a9afb68710d809570460c29d6c3293219d3624d4
      production: docker
      release: '6.0'


2015-03-24 19:33:14.780 57129 ERROR glance_store._drivers.vmware_datastore [a9eaa1a6-d60c-4f62-8abc-7d2a18e7d4b5 35eec72551634331b0c00b6d1b10fc8f 271797c584e2445d83b82fc57697daaf - - -] Failed to upload content of image 329a5529-e3d7-478f-b63a-ff33e68b9259
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore Traceback (most recent call last):
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.6/site-packages/glance_store/_drivers/vmware_datastore.py", line 346, in add
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     content=image_file)
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.6/site-packages/glance_store/_drivers/vmware_datastore.py", line 489, in _get_http_conn
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     conn.request(method, url, content, headers)
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib64/python2.6/httplib.py", line 914, in request
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     self._send_request(method, url, body, headers)
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib64/python2.6/httplib.py", line 954, in _send_request
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     self.send(body)
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib64/python2.6/httplib.py", line 756, in send
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     self.sock.sendall(data)
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.6/site-packages/eventlet/green/ssl.py", line 137, in sendall
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     v = self.send(data[count:])
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.6/site-packages/eventlet/green/ssl.py", line 113, in send
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     super(GreenSSLSocket, self).send, data, flags)
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.6/site-packages/eventlet/green/ssl.py", line 80, in _call_trampolining
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     return func(*a, **kw)
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib64/python2.6/ssl.py", line 174, in send
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore     v = self._sslobj.write(data)
2015-03-24 19:33:14.780 57129 TRACE glance_store._drivers.vmware_datastore error: [Errno 104] Connection reset by peer

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: customer-found

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1436034

Title:
  connection reset by peer during glance image-create with vCenter
  backend

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Environment:
  HA with vCenter hypervisor
  nova-network vlanmanager
  cinder VMwareVcVmdkDriver

  api: '1.0'
  astute_sha: 16b252d93be6aaa73030b8100cf8c5ca6a970a91
  auth_required: true
  build_id: 2014-12-26_14-25-46
  build_number: '58'
  feature_groups:
  - mirantis
  fuellib_sha: fde8ba5e11a1acaf819d402c645c731af450aff0
  fuelmain_sha: 81d38d6f2903b5a8b4bee79ca45a54b76c1361b8
  nailgun_sha: 5f91157daa6798ff522ca9f6d34e7e135f150a90
  ostf_sha: a9afb68710d809570460c29d6c3293219d3624d4
  production: docker
  release: '6.0'
  release_versions:
    2014.2-6.0:
      VERSION:
        api: '1.0'
        astute_sha: 

[Yahoo-eng-team] [Bug 1433142] Re: ProcessLauncher should support reloading config file for parent process on receiving SIGHUP

2015-03-24 Thread Elena Ezhova
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433142

Title:
  ProcessLauncher should support reloading config file for parent
  process on receiving SIGHUP

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in The Oslo library incubator:
  In Progress

Bug description:
  Currently, when a parent process receives SIGHUP it just sends SIGHUP
  to its children. While the children reload their config files and call
  reset on their services, the parent continues to run with the old
  config.
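
  A minimal sketch of the desired parent-side behaviour, assuming a
  hypothetical reload_config() helper and child_pids list (neither is
  part of the oslo code; this is not the actual implementation):

    import os
    import signal

    def _sighup(signum, frame):
        reload_config()           # hypothetical: re-read config in the parent too
        for pid in child_pids:    # then propagate to the workers, as today
            os.kill(pid, signal.SIGHUP)

    signal.signal(signal.SIGHUP, _sighup)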

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435265] Re: nova flavor consider Lower case when create and but not in update

2015-03-24 Thread Eli Qiao
This should be related to MySQL. After some investigation: the
InstanceTypes model has this constraint, but the comparison is not
case-sensitive, so it considers test1-0 and TEST1-0 the same thing.

schema.UniqueConstraint("name", "deleted",
name="uniq_instance_types0name0deleted")
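
A quick way to confirm the collation behaviour, assuming MySQL's default
case-insensitive utf8 collation (utf8_general_ci):

  CREATE TABLE t (name VARCHAR(10), UNIQUE KEY (name))
      CHARACTER SET utf8 COLLATE utf8_general_ci;
  INSERT INTO t VALUES ('TEST1');
  INSERT INTO t VALUES ('Test1');  -- fails: ERROR 1062 duplicate entry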


** Project changed: nova => oslo.db

** Changed in: oslo.db
 Assignee: Eli Qiao (taget-9) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1435265

Title:
  nova flavor consider Lower case when create and but not in update

Status in Oslo Database library:
  Confirmed

Bug description:

  taget@taget-ThinkStation-P300:~/devstack$ nova flavor-create Test1 100
  ^C11 1 1


  When creating/deleting a flavor, nova treats the name as
  case-insensitive, but it doesn't for update.

  for example:

  1. I have a flavor named TEST1

  taget@taget-ThinkStation-P300:~/devstack$ nova flavor-list
  +--------------------------------------+-------------------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID                                   | Name              | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +--------------------------------------+-------------------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1                                    | m1.tiny           | 512       | 1    | 0         |      | 1     | 1.0         | True      |
  | 101                                  | TEST1             | 511       | 1    | 0         |      | 1     | 1.0         | True      |
  | 2                                    | m1.small          | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
  | 3                                    | m1.medium         | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
  | 31eb8c58-2b0a-4892-80d6-ee36d4e64871 | test              | 512       | 3    | 0         |      | 2     | 1.0         | True      |
  | 4                                    | m1.large          | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
  | 42                                   | m1.nano           | 64        | 0    | 0         |      | 1     | 1.0         | True      |
  | 5                                    | m1.xlarge         | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
  | 78cd18c8-aa73-4cfc-8b01-9fbdad87b61b | controller-flavor | 4096      | 20   | 5         | 1    | 4     | 1.0         | True      |
  | 84                                   | m1.micro          | 128       | 0    | 0         |      | 1     | 1.0         | True      |
  +--------------------------------------+-------------------+-----------+------+-----------+------+-------+-------------+-----------+

  2. I cannot create a flavor named Test1; nova considers them the same.
  taget@taget-ThinkStation-P300:~/devstack$ nova flavor-create Test1 100 511 1 1
  ERROR (Conflict): Flavor with name Test1 already exists. (HTTP 409) (Request-ID: req-9d3a652b-84c8-4580-96f2-c684a95be5f9)

  3. But when I try to update it by Test1, it fails; nova considers it to
  not exist.

  taget@taget-ThinkStation-P300:~/devstack$ nova flavor-key Test1 set ram 510
  ERROR (CommandError): No flavor with a name or ID of 'Test1' exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.db/+bug/1435265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435693] [NEW] A number of places where we LOG messages fail to use the _L{X} formatting

2015-03-24 Thread Henry Nash
Public bug reported:

These should be corrected.
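
For illustration, the pattern being asked for, assuming keystone's i18n
marker functions of that era (_LI/_LW/_LE) and the oslo logger:

  from oslo_log import log

  from keystone.i18n import _LE, _LI

  LOG = log.getLogger(__name__)

  def log_domain_events(domain_id):
      LOG.info(_LI('Created domain %s'), domain_id)
      LOG.error(_LE('Failed to create domain %s'), domain_id)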

** Affects: keystone
 Importance: Low
 Assignee: Henry Nash (henry-nash)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1435693

Title:
  A number of places where we LOG messages fail to use the _L{X}
  formatting

Status in OpenStack Identity (Keystone):
  New

Bug description:
  These should be corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1435693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403020] Re: Kwarg 'filter_class_names' is never passed to HostManager#get_filtered_hosts

2015-03-24 Thread zhangtralon
** Also affects: cinder
   Importance: Undecided
   Status: New

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403020

Title:
  Kwarg 'filter_class_names' is never passed to
  HostManager#get_filtered_hosts

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The parameter filter_class_names of the function get_filtered_hosts is
  never assigned a value, so we always use the filters from
  CONF.scheduler_default_filters.
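
  For illustration, a sketch of a call site that does pass the kwarg,
  assuming the Juno-era HostManager API (the filter names here are just
  examples):

    hosts = self.host_manager.get_filtered_hosts(
        hosts, filter_properties,
        filter_class_names=['RamFilter', 'ComputeFilter'])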

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1403020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp