[Yahoo-eng-team] [Bug 1551165] Re: delete a VM with macvtap port, port's MAC is not cleaned

2016-03-08 Thread Yan Songming
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551165

Title:
  delete a VM with macvtap port, port's MAC is not cleaned

Status in neutron:
  Invalid

Bug description:
  1 neutron port-create b9ebf2d9-6823-472f-837f-405dc27d01cc --name bandwidth --binding:vnic-type macvtap
  Created a new port:
  +---------------------+---------+
  | Field               | Value   |
  +---------------------+---------+
  | admin_state_up      | True    |
  | binding:host_id     |         |
  | binding:profile     | {}      |
  | binding:vif_details | {}      |
  | binding:vif_type    | unbound |
  | binding:vnic_type   | macvtap |
  | device_id           |         |

  2  nova boot --flavor m1.small --image
  2a3e422a-0480-4a73-b6b1-2c15e06ed373 --nic port-id=71d9c5a3-aadc-
  4b29-8e52-17d6028d1b0c instance3

  4  nova delete 70900b6f-5e4c-4830-8f89-4df4f0f14cc4
  Request to delete server 70900b6f-5e4c-4830-8f89-4df4f0f14cc4 has been accepted.

  237: enp2s0f0:  mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
  link/ether 90:e2:ba:aa:45:a8 brd ff:ff:ff:ff:ff:ff
  vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  ...
  vf 61 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  vf 62 MAC fa:16:3e:23:b4:10, spoof checking on, link-state auto

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554412] [NEW] purge command fails due to foreign key constraint

2016-03-08 Thread Abhishek Kekane
Public bug reported:

In our production environment we have somehow encountered an issue with the
purge command. One example: the purge command fails if an image has been
deleted but the members entries for that particular image have not been
deleted.

Steps to reproduce:

1. Apply patch https://review.openstack.org/#/c/278870/7 (otherwise you will
get a subquery-related error)
2. Run the purge command "glance-manage db purge 1 1"

Purge command trace:

2016-03-08 08:25:16.127 CRITICAL glance [req-7ce644e9-3c7d-
4cd4-8377-77f7e1a689a5 None None] DBReferenceError:
(pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent
row: a foreign key constraint fails (`glance`.`image_locations`,
CONSTRAINT `image_locations_ibfk_1` FOREIGN KEY (`image_id`) REFERENCES
`images` (`id`))') [SQL: u'DELETE FROM images WHERE images.id in (SELECT
T1.id FROM (SELECT images.id \nFROM images \nWHERE images.deleted_at <
%(deleted_at_1)s \n LIMIT %(param_1)s) as T1)'] [parameters:
{u'deleted_at_1': datetime.datetime(2016, 3, 7, 8, 25, 16, 92480),
u'param_1': 1}]

2016-03-08 08:25:16.127 TRACE glance Traceback (most recent call last):
2016-03-08 08:25:16.127 TRACE glance   File "/usr/local/bin/glance-manage", 
line 10, in 
2016-03-08 08:25:16.127 TRACE glance sys.exit(main())
2016-03-08 08:25:16.127 TRACE glance   File 
"/opt/stack/glance/glance/cmd/manage.py", line 344, in main
2016-03-08 08:25:16.127 TRACE glance return 
CONF.command.action_fn(*func_args, **func_kwargs)
2016-03-08 08:25:16.127 TRACE glance   File 
"/opt/stack/glance/glance/cmd/manage.py", line 165, in purge
2016-03-08 08:25:16.127 TRACE glance db_api.purge_deleted_rows(ctx, 
age_in_days, max_rows)
2016-03-08 08:25:16.127 TRACE glance   File 
"/opt/stack/glance/glance/db/sqlalchemy/api.py", line 1327, in 
purge_deleted_rows
2016-03-08 08:25:16.127 TRACE glance result = 
session.execute(delete_statement)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1034, 
in execute
2016-03-08 08:25:16.127 TRACE glance bind, 
close_with_result=True).execute(clause, params or {})
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 914, 
in execute
2016-03-08 08:25:16.127 TRACE glance return meth(self, multiparams, params)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 323, 
in _execute_on_connection
2016-03-08 08:25:16.127 TRACE glance return 
connection._execute_clauseelement(self, multiparams, params)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1010, 
in _execute_clauseelement
2016-03-08 08:25:16.127 TRACE glance compiled_sql, distilled_params
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1146, 
in _execute_context
2016-03-08 08:25:16.127 TRACE glance context)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1337, 
in _handle_dbapi_exception
2016-03-08 08:25:16.127 TRACE glance util.raise_from_cause(newraise, 
exc_info)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 200, 
in raise_from_cause
2016-03-08 08:25:16.127 TRACE glance reraise(type(exception), exception, 
tb=exc_tb, cause=cause)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
2016-03-08 08:25:16.127 TRACE glance context)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
2016-03-08 08:25:16.127 TRACE glance cursor.execute(statement, parameters)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 146, in 
execute
2016-03-08 08:25:16.127 TRACE glance result = self._query(query)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 296, in _query
2016-03-08 08:25:16.127 TRACE glance conn.query(q)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 819, in 
query
2016-03-08 08:25:16.127 TRACE glance self._affected_rows = 
self._read_query_result(unbuffered=unbuffered)
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1001, in 
_read_query_result
2016-03-08 08:25:16.127 TRACE glance result.read()
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1285, in 
read
2016-03-08 08:25:16.127 TRACE glance first_packet = 
self.connection._read_packet()
2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/loca
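The constraint failure captured in the trace can be reproduced in miniature. This is an illustrative sketch only, using made-up sqlite tables that mirror the `images`/`image_locations` relationship, not the actual glance-manage code: deleting parent rows while child rows still reference them fails, while purging children first succeeds.

```python
import sqlite3

# Illustrative sketch only: two made-up tables mirror the parent/child
# relationship between glance's `images` and `image_locations`.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE images (id TEXT PRIMARY KEY, deleted_at TEXT)")
conn.execute("CREATE TABLE image_locations ("
             "id INTEGER PRIMARY KEY, "
             "image_id TEXT REFERENCES images (id))")
conn.execute("INSERT INTO images VALUES ('img-1', '2016-03-07')")
conn.execute("INSERT INTO image_locations (image_id) VALUES ('img-1')")

# Deleting the parent first fails, as in the DBReferenceError above.
try:
    conn.execute("DELETE FROM images WHERE deleted_at < '2016-03-08'")
    failed = False
except sqlite3.IntegrityError:
    failed = True
print(failed)  # True

# Purging child rows before the parent rows succeeds.
conn.execute("DELETE FROM image_locations WHERE image_id IN "
             "(SELECT id FROM images WHERE deleted_at < '2016-03-08')")
conn.execute("DELETE FROM images WHERE deleted_at < '2016-03-08'")
print(conn.execute("SELECT COUNT(*) FROM images").fetchone()[0])  # 0
```

A real fix would need the same child-first ordering (or cascading constraints) for every table that references `images`.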

[Yahoo-eng-team] [Bug 1554414] [NEW] Avoid calling _get_subnet(s) multiple times in ipam driver

2016-03-08 Thread venkata anil
Public bug reported:

While allocating or updating IPs for a port, _get_subnet and _get_subnets
are called multiple times.

For example, if update_port is called with the fixed_ips below:
fixed_ips = [{'subnet_id': 'subnet1'},
             {'subnet_id': 'v6_dhcp_stateless_subnet'},
             {'subnet_id': 'v6_slaac_subnet'},
             {'subnet_id': 'v6_pd_subnet'},
             {'subnet_id': 'subnet4', 'ip_address': '30.0.0.3'},
             {'ip_address': '40.0.0.4'},
             {'ip_address': '50.0.0.5'}]
then through _test_fixed_ips_for_port(fixed_ips), "_get_subnet" [1] is called
once for each of subnet1, v6_dhcp_stateless_subnet, v6_slaac_subnet,
v6_pd_subnet and subnet4, and "_get_subnets" [2] is called twice, for
ip_address 40.0.0.4 and 50.0.0.5.


When _test_fixed_ips_for_port is called from _allocate_ips_for_port,
_get_subnets has already been called at [3], which increases the call count
further. So in _allocate_ips_for_port, if we save the subnets from [3] in a
local variable and use those in-memory subnets in further calls, we can
avoid the above DB calls.


Sometimes when a subnet is updated, update_subnet may trigger
update_port(fixed_ips) [4] for all ports on the subnet. If each port's
fixed_ips contain multiple subnets and ip_addresses, then _get_subnet and
_get_subnets will be called multiple times for each port, as in the example
above. For instance, with 10 ports on the subnet, update_subnet will result
in 10*6=60 DB accesses instead of 10.


When port_update is called for a PD subnet, it again calls get_subnet for
each fixed_ip [5] to check whether the subnet is a PD subnet (after
get_subnet and get_subnets have already been called many times in
_test_fixed_ips_for_port).


In all the above cases we access the DB many times for _get_subnet and
_get_subnets. So instead of calling get_subnet or get_subnets for each
fixed_ip of a port (at multiple places), we can call get_subnets for the
network once at the beginning of _allocate_ips_for_port (for create port)
and _update_ips_for_port (during update) and use the in-memory subnets in
subsequent private functions.


[1] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L311
[2] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L331
[3] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py#L192
[4] 
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L785
[5] 
https://review.openstack.org/#/c/241227/11/neutron/db/ipam_non_pluggable_backend.py
 Lines 284 and 334.
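The proposal above — one up-front subnets query per network, then in-memory lookups per fixed_ip — can be sketched roughly as follows. Class and field names are hypothetical, not the actual Neutron IPAM code:

```python
# Hypothetical sketch: fetch a network's subnets once, then answer every
# per-fixed-ip lookup from memory instead of issuing a new DB query.
class SubnetCache:
    def __init__(self, subnets):
        # subnets: the list a single up-front _get_subnets() call returned
        self._by_id = {s["id"]: s for s in subnets}
        self._all = list(subnets)
        self.db_calls = 1  # the one up-front query

    def get_subnet(self, subnet_id):
        return self._by_id[subnet_id]  # no extra DB round trip

    def get_subnets(self):
        return self._all  # no extra DB round trip

subnets = [{"id": "subnet1", "cidr": "30.0.0.0/24"},
           {"id": "v6_slaac_subnet", "cidr": "2001:db8::/64"}]
cache = SubnetCache(subnets)

fixed_ips = [{"subnet_id": "subnet1"},
             {"subnet_id": "v6_slaac_subnet"},
             {"ip_address": "30.0.0.3"}]
for ip in fixed_ips:
    if "subnet_id" in ip:
        cache.get_subnet(ip["subnet_id"])
    else:
        cache.get_subnets()
print(cache.db_calls)  # 1 -- instead of one query per fixed_ip
```

The same cache object would be passed down from _allocate_ips_for_port / _update_ips_for_port into the private helpers so each port triggers only one subnets query.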

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: ipv6 l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: ipv6 l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554414

Title:
  Avoid calling _get_subnet(s) multiple times in ipam driver

Status in neutron:
  New

Bug description:
  While allocating or updating IPs for a port, _get_subnet and
  _get_subnets are called multiple times.

  For example, if update_port is called with the fixed_ips below:
  fixed_ips = [{'subnet_id': 'subnet1'},
               {'subnet_id': 'v6_dhcp_stateless_subnet'},
               {'subnet_id': 'v6_slaac_subnet'},
               {'subnet_id': 'v6_pd_subnet'},
               {'subnet_id': 'subnet4', 'ip_address': '30.0.0.3'},
               {'ip_address': '40.0.0.4'},
               {'ip_address': '50.0.0.5'}]
  then through _test_fixed_ips_for_port(fixed_ips), "_get_subnet" [1] is
  called once for each of subnet1, v6_dhcp_stateless_subnet,
  v6_slaac_subnet, v6_pd_subnet and subnet4, and "_get_subnets" [2] is
  called twice, for ip_address 40.0.0.4 and 50.0.0.5.

  
  When _test_fixed_ips_for_port is called from _allocate_ips_for_port,
  _get_subnets has already been called at [3], which increases the call
  count further. So in _allocate_ips_for_port, if we save the subnets from
  [3] in a local variable and use those in-memory subnets in further calls,
  we can avoid the above DB calls.

  
  Sometimes when a subnet is updated, update_subnet may trigger
  update_port(fixed_ips) [4] for all ports on the subnet. If each port's
  fixed_ips contain multiple subnets and ip_addresses, then _get_subnet and
  _get_subnets will be called multiple times for each port, as in the
  example above. For instance, with 10 ports on the subnet, update_subnet
  will result in 10*6=60 DB accesses instead of 10.

  
  When port_update is called for a PD subnet, it again calls get_subnet for
  each fixed_ip [5] to check whether the subnet is a PD subnet (after
  get_subnet and get_subnets have already been called many times in
  _test_fixed_ips_for_port).

  
  In all the above cases we access the DB many times for _get_subnet and
  _get_subnets.
   So instead of calling 

[Yahoo-eng-team] [Bug 1554339] Re: ComputeCapabilitiesFilter failed to work with AggregateInstanceExtraSpecsFilter

2016-03-08 Thread wuhao
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554339

Title:
  ComputeCapabilitiesFilter failed to work with
  AggregateInstanceExtraSpecsFilter

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Currently, ComputeCapabilitiesFilter returns False if 'extra_specs'
  cannot be retrieved from the host state, and
  AggregateInstanceExtraSpecsFilter also uses extra_specs when creating a
  new instance in a host aggregate. The problem is that they fail to select
  any destination when the key is in non-scoped format. As shown in
  https://github.com/openstack/nova/blob/master/nova/scheduler/filters/compute_capabilities_filter.py#L77.

  For example, we may want to use a host aggregate to create a new
  instance on some servers with specified properties. So we create a new
  flavor and set 'extra_specs' to {"property": "balabala"}, and
  ComputeCapabilitiesFilter will return False. In this situation,
  ComputeCapabilitiesFilter fails to work with
  AggregateInstanceExtraSpecsFilter.

  I think we'd better check the capabilities only when the key for the
  filter is in scoped format within the capabilities scope (i.e.
  capabilities:xxx:yyy), rather than non-scoped format (i.e. no ':'
  contained).
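A rough sketch of the suggested behavior follows. The function name and key handling are assumptions for illustration, not the actual nova filter code: only keys scoped under the `capabilities:` prefix are treated as capability requirements, so unscoped aggregate keys like "property" no longer make the filter fail.

```python
# Hypothetical sketch: treat a flavor extra_spec as a capabilities
# requirement only when its key is scoped as capabilities:xxx[:yyy].
def relevant_capability_specs(extra_specs):
    relevant = {}
    for key, value in extra_specs.items():
        scope = key.split(":")
        # Unscoped keys (no ':') are left for other filters, e.g.
        # AggregateInstanceExtraSpecsFilter.
        if len(scope) > 1 and scope[0] == "capabilities":
            relevant[":".join(scope[1:])] = value
    return relevant

specs = {"property": "balabala",                 # aggregate metadata key
         "capabilities:hypervisor_type": "QEMU"} # scoped capabilities key
print(relevant_capability_specs(specs))
# {'hypervisor_type': 'QEMU'} -- "property" no longer trips the filter
```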

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554339/+subscriptions



[Yahoo-eng-team] [Bug 1549726] Re: Race condition in keystone domain config

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/287020
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=0b18edab226f6e9dc531febd4eb6f65ccd3c031e
Submitter: Jenkins
Branch:master

commit 0b18edab226f6e9dc531febd4eb6f65ccd3c031e
Author: Divya 
Date:   Wed Mar 2 08:05:42 2016 +0100

Race condition in keystone domain config

This bug fixes a race condition in the domains_config
decorator. The race condition occurs when more than
one thread accesses the decorator. The first thread
sets the configured flag to True before proceeding with
driver load leading the second thread to use the default
driver. This fix ensures that the second thread waits for
the first thread to finish configuration before it uses
the driver.

Change-Id: I0289a4d38e0d30d39c67e29bf77b0a89d1dd23f6
Closes-Bug: 1549726


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1549726

Title:
  Race condition in keystone domain config

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  This bug is very difficult to reproduce but occurs nevertheless, and
  can be observed when we switch the backend drivers.

  Steps to reproduce:
  1. Switch the backend driver in the keystone conf file from one driver to
another. Restart keystone.
  2. Immediately (if you wait a few seconds, this cannot be reproduced),
make calls that in turn access the keystone methods keystone/identity/core
>> get_group and list_users_in_group. It doesn't have to be exactly these
two; it can be any two similar methods in identity/core.py that use the
@domains_configured decorator.

  
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L100

  Invoke two methods that use this decorator; these method invocations
  must be almost parallel. Both methods hit the following flow, where the
  race condition occurs:

  
  def domains_configured(f):
      """Wraps API calls to lazy load domain configs after init."""
      @functools.wraps(f)
      def wrapper(self, *args, **kwargs):
          if (not self.domain_configs.configured and
                  CONF.identity.domain_specific_drivers_enabled):
              self.domain_configs.setup_domain_drivers(
                  self.driver, self.resource_api)
          else:
              LOG.error('domains will not be configured')
          return f(self, *args, **kwargs)
      return wrapper

  def setup_domain_drivers(self, standard_driver, resource_api):
      # This is called by the api call wrapper
      self.configured = True
      self.driver = standard_driver
      ...

  When the first call is placed, it sets self.configured to True and then
  proceeds to load the driver that corresponds to the domain. However, the
  second call assumes that the driver load is already complete (purely
  based on the value of self.configured, which is True even though the
  driver is not really loaded). It thus ends up using the default driver
  (i.e. the driver which is not domain specific) and retrieves the values.

  There should be some synchronization logic added inside
  domains_configured (or one of the internal methods) so that an incorrect
  backend driver is not used by a request.
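One possible shape for that synchronization, as a hedged sketch rather than the exact keystone fix: flip the `configured` flag only after the driver load has finished, and guard the check-then-setup sequence with a lock so a second thread waits instead of racing ahead with the default driver.

```python
import threading

# Hedged sketch (class and attribute names are simplified, not the actual
# keystone code): the lock serializes setup, and `configured` becomes True
# only once the driver is fully loaded.
class DomainConfigs:
    def __init__(self):
        self.configured = False
        self.driver = "default-driver"
        self._lock = threading.Lock()

    def setup_domain_drivers(self, standard_driver):
        with self._lock:
            if self.configured:   # double-check: a racer already finished
                return
            self.driver = standard_driver  # slow driver load happens here
            self.configured = True         # flag flips only when done

configs = DomainConfigs()
threads = [threading.Thread(target=configs.setup_domain_drivers,
                            args=("domain-specific-driver",))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(configs.driver)  # domain-specific-driver
```

With the original ordering (flag first, load second), the second thread could observe `configured == True` while `driver` still held the default; here it blocks on the lock until setup completes.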

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1549726/+subscriptions



[Yahoo-eng-team] [Bug 1552978] Re: pecan doesn't work with dhcp agent scheduler extension

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/267985
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=665942866940fc474a3b122828ccb2081e975215
Submitter: Jenkins
Branch:master

commit 665942866940fc474a3b122828ccb2081e975215
Author: Brandon Logan 
Date:   Fri Jan 15 01:29:22 2016 -0600

Pecan routing for agent schedulers

For pecan to support existing agent scheduler controllers, a couple of shim
pecan controllers need to be added.  These shim controllers will be used if
there are extensions that have legacy controllers that have not been
registered.  The shim controller is just a passthrough to those legacy
controllers.  This may have the added benefit of support existing out of
tree extensions that have defined their legacy extension controllers the
same way.

Changes to how the router(s) controllers determines whether something
is a member action has been changed a bit to support this.

Closes-Bug: #1552978
Co-Authored-By: Kevin Benton 
Change-Id: Icec56676d83b604c3db3377838076d6429d61e48


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552978

Title:
  pecan doesn't work with dhcp agent scheduler extension

Status in neutron:
  Fix Released

Bug description:
  pecan can't dispatch requests to extensions that defined their own
  controllers. An example of this is the dhcp agent scheduler extension.
  We need to provide an automatic shim layer to be compatible with these
  old extensions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552978/+subscriptions



[Yahoo-eng-team] [Bug 1554440] [NEW] Show volume attachment exception does not look good if server is in shelved_offloaded state

2016-03-08 Thread Ghanshyam Mann
Public bug reported:

In microversion v2.20, Nova allows attaching/detaching a volume on a server
in shelved or shelved_offloaded state.

When a server is in shelved_offloaded state, it means the server is not on
any host, and the volume attach operation is done directly via cinder.
In that case the device volume mount is deferred until the instance is
unshelved, which is a valid case and all is fine up to here.

Now the issue: if a user lists the attachments for a server in
shelved_offloaded state, it returns the attachments without 'device', but
show volume attachment throws an exception because there is no mount point
for that attachment
- https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/volumes.py#L289

List and show attachments should be consistent for a server in
shelved_offloaded state.

Also, the error message when the volume is not mounted - "volume_id not
found: %s" - does not sound very nice for the other case either.
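A hypothetical sketch of the consistency being asked for (the function and field names are made up for illustration, not the nova API code): show mirrors list by returning the attachment with a null device instead of raising when the mount point is deferred.

```python
# Hypothetical sketch: `show` finds the attachment and surfaces a missing
# mount point as device=None (as `list` effectively does), instead of
# raising the misleading "volume_id not found" error.
def show_attachment(attachments, volume_id):
    for att in attachments:
        if att["volume_id"] == volume_id:
            # 'device' may be absent for shelved_offloaded servers, where
            # the mount is deferred until unshelve.
            return {"volume_id": att["volume_id"],
                    "device": att.get("device")}
    raise KeyError("volume_id not found: %s" % volume_id)

attachments = [{"volume_id": "vol-1"}]  # no 'device': mount deferred
print(show_attachment(attachments, "vol-1"))
# {'volume_id': 'vol-1', 'device': None}
```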

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554440

Title:
  Show volume attachment exception does not look good if server is in
  shelved_offloaded state

Status in OpenStack Compute (nova):
  New

Bug description:
  In microversion v2.20, Nova allows attaching/detaching a volume on a
  server in shelved or shelved_offloaded state.

  When a server is in shelved_offloaded state, it means the server is not
  on any host, and the volume attach operation is done directly via cinder.
  In that case the device volume mount is deferred until the instance is
  unshelved, which is a valid case and all is fine up to here.

  Now the issue: if a user lists the attachments for a server in
  shelved_offloaded state, it returns the attachments without 'device', but
  show volume attachment throws an exception because there is no mount
  point for that attachment
  - https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/volumes.py#L289

  List and show attachments should be consistent for a server in
  shelved_offloaded state.

  Also, the error message when the volume is not mounted - "volume_id not
  found: %s" - does not sound very nice for the other case either.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554440/+subscriptions



[Yahoo-eng-team] [Bug 1554452] [NEW] Request release for networking-bgpvpn

2016-03-08 Thread Thomas Morin
Public bug reported:

The networking-bgpvpn project needs a new release in its stable/liberty
branch.

Release Info :

Branch : stable/liberty
Commit-Id : ab3d1c796dfbc853146aec7c57136148c8b33836
New Tag: 3.0.1

** Affects: bgpvpn
 Importance: Undecided
 Status: Confirmed

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: bgpvpn
Milestone: 3.next => 3.0.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554452

Title:
   Request release for networking-bgpvpn

Status in bgpvpn:
  Confirmed
Status in neutron:
  New

Bug description:
  The networking-bgpvpn project needs a new release in its
  stable/liberty branch.

  Release Info :

  Branch : stable/liberty
  Commit-Id : ab3d1c796dfbc853146aec7c57136148c8b33836
  New Tag: 3.0.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1554452/+subscriptions



[Yahoo-eng-team] [Bug 1554456] [NEW] Unexpected API error: DBError

2016-03-08 Thread Madhuri Kumari
Public bug reported:

1. Nova Version
commit 6b2e8ed9bd7bd068b23879a60f634ac94cf563ec
Merge: d42961a 897cb7c
Author: Jenkins 
Date:   Tue Mar 8 04:03:14 2016 +

Merge "Fix string interpolations at logging calls"

2. Logs:
Logs: http://paste.openstack.org/show/489654/

** Affects: nova
 Importance: Undecided
 Assignee: Madhuri Kumari (madhuri-rai07)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Madhuri Kumari (madhuri-rai07)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554456

Title:
  Unexpected API error:  DBError

Status in OpenStack Compute (nova):
  New

Bug description:
  1. Nova Version
  commit 6b2e8ed9bd7bd068b23879a60f634ac94cf563ec
  Merge: d42961a 897cb7c
  Author: Jenkins 
  Date:   Tue Mar 8 04:03:14 2016 +

  Merge "Fix string interpolations at logging calls"

  2. Logs:
  Logs: http://paste.openstack.org/show/489654/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554456/+subscriptions



[Yahoo-eng-team] [Bug 1554464] [NEW] Radware lbaas v2 driver should not treat listener without default pool

2016-03-08 Thread Evgeny Fedoruk
Public bug reported:

Radware lbaas v2 driver should not consider a listener with no default pool.
If a pool is deleted and it is the listener's default pool, do not send the
listener to the back end.

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New


** Tags: lbaas radware

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554464

Title:
  Radware lbaas v2 driver should not treat listener without default pool

Status in neutron:
  New

Bug description:
  Radware lbaas v2 driver should not consider a listener with no default
  pool. If a pool is deleted and it is the listener's default pool, do not
  send the listener to the back end.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554464/+subscriptions



[Yahoo-eng-team] [Bug 1282956] Re: ML2 : hard reboot a VM after a compute crash

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289458
Committed: 
https://git.openstack.org/cgit/openstack/operations-guide/commit/?id=610019c45496e17a8094d535c856d9de9a16f123
Submitter: Jenkins
Branch:master

commit 610019c45496e17a8094d535c856d9de9a16f123
Author: Andreas Scheuring 
Date:   Mon Mar 7 18:13:18 2016 +0100

Ops: Update Compute Node Failure section with Neutron content

Adding content describing how to fix the Neutron ML2 database in the
case of a total compute node failure.

Change-Id: I4712f604f798c6c9ebb6f9d8d82b63d8ac2ed599
Closes-Bug: #1282956


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282956

Title:
  ML2 : hard reboot a VM after a compute crash

Status in neutron:
  Won't Fix
Status in openstack-manuals:
  Fix Released

Bug description:
  I run in multi node setup with ML2, L2-population and Linuxbridge MD,
  and vxlan TypeDriver.

  I start two compute nodes, launch a VM, and then shut down the compute
  node which hosts the VM.

  I use this process to relaunch the VM on the other compute node:

  http://docs.openstack.org/trunk/openstack-
  ops/content/maintenance.html#totle_compute_node_failure

  Once the VM is launched on the other compute node, FDB entries and
  neighbour entries are no longer populated on the network node or on the
  compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282956/+subscriptions



[Yahoo-eng-team] [Bug 1448014] Re: Delayed display of floating IPs

2016-03-08 Thread Abhishek vats
** Changed in: nova
   Status: Confirmed => Fix Released

** Description changed:

  Using nova 1:2014.1.4-0ubuntu2 (Icehouse) on Ubuntu 14.04.2 LTS
  
  After associating a floating IP address to an instance in Build/Spawning
  state, 'nova list' and 'nova show' need - by default - a long time (up
  to 40 minutes) to display that floating IP.
  
  Steps to reproduce:
  
  * Launching instance via Horizon
  * Associate a floating IP address while instance is in Build/Spawning state 
via Horizon
  
  Expected result:
  
  * 'nova list' and 'nova show' should print the floating IP consistently
  * the floating IP should be part of the related row in 
nova.instance_info_caches database table consistently
  
  Actual result:
  
  * while in Build/Spawning state 'nova list' and 'nova show' displays the 
floating IP address
  * while in Build/Spawning state the floating IP is part of the related row in 
nova.instance_info_caches
  
  * when the instance is switching to Active/Running state, the floating
  IP disappears in 'nova list', 'nova show' and the
  nova.instance_info_caches entry
  
  * a little later (related to heal_instance_info_cache_interval (see
  below)) the floating IP reappears
  
  Side note 1: This issue does not occur, if the floating IP is associated 
after launching (in Active/Running state).
  Side note 2: In Horizon, the floating IP is listed all the time.
  Side note 3: The floating IP is working (ping, ssh), even if not displayed.
  
  Output of 'select * from nova.instance_info_cache':
  
  Instance in Build/Spawning:
  *** 38. row ***
-created_at: 2015-04-24 09:06:23
-updated_at: 2015-04-24 09:06:43
-deleted_at: NULL
-id: 1671
-  network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, 
"type": "fixed", "floating_ips": [{"meta": {}, "version": 4, "type": 
"floating", "address": "10.0.0.5"}], "address": "192.168.178.212"}], "version": 
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr": 
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id": 
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id": 
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname": 
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter": 
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active": 
false, "type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"qbg_params": null}]
+    created_at: 2015-04-24 09:06:23
+    updated_at: 2015-04-24 09:06:43
+    deleted_at: NULL
+    id: 1671
+  network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, 
"type": "fixed", "floating_ips": [{"meta": {}, "version": 4, "type": 
"floating", "address": "10.0.0.5"}], "address": "192.168.178.212"}], "version": 
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr": 
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id": 
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id": 
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname": 
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter": 
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active": 
false, "type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"qbg_params": null}]
  instance_uuid: f0d22419-1cac-47ce-9063-eee37fad97b9
-   deleted: 0
+   deleted: 0
  
  Instance switches to Active/Running ("floating_ips" becomes empty):
  *** 38. row ***
-created_at: 2015-04-24 09:06:23
-updated_at: 2015-04-24 09:07:04
-deleted_at: NULL
-id: 1671
-  network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, 
"type": "fixed", "floating_ips": [], "address": "192.168.178.212"}], "version": 
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr": 
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id": 
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id": 
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname": 
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter": 
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active": 
false, "type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"qbg_params": null}]
+    created_at: 2015-04-24 09:06:23
+    updated_at: 2015-04-24 09:07:04
+    deleted_at: NULL
+    id: 1671
+  network_info:

[Yahoo-eng-team] [Bug 1552056] Re: "router-interface-delete port=xxx" deletes the whole port instead of just removing the interface

2016-03-08 Thread Atsushi SAKAI
** Also affects: openstack-api-site
   Importance: Undecided
   Status: New

** Changed in: openstack-api-site
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552056

Title:
  "router-interface-delete port=xxx" deletes the whole port instead of
  just removing the interface

Status in neutron:
  In Progress
Status in openstack-api-site:
  Confirmed

Bug description:
  The help message of "neutron router-interface-delete" says

  "Remove an internal network interface from a router."

  =
  Expected behavior
  =
  neutron router-interface-add subnet=subnetx
  --> creates a port, and adds this port as interface to the router

  neutron router-interface-delete subnet=subnetx
  --> removes that interface from the router and deletes the corresponding port

  
  neutron router-interface-add port=portx
  --> adds an already existing port as interface to the router

  neutron router-interface-delete port=portx
  --> removes that interface from the router. Does NOT delete that 
corresponding port

  =
  Actual result
  =

  "neutron router-interface-delete subnet=subnetx" works as expected.

  BUT

  "neutron router-interface-delete port=portx" does not only remove the
  interface from the router, it also deletes the whole port!

  
  =
  Proposed solution
  =

  Either
  #1 Extend the API description to reflect this behavior

  
  Or
  #2 Change the behavior in the special case to NOT delete the port, but only 
remove the interface from the router
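
A minimal sketch of proposed solution #2 (illustrative only, not neutron's actual code): remember whether the router interface port was created implicitly from a subnet, and only delete the port in that case.

```python
# Sketch of proposed solution #2 (illustrative, not neutron's actual code):
# remember whether the router interface port was created implicitly from a
# subnet, and only delete the port in that case.
def remove_router_interface(router, port, created_from_subnet):
    """Detach `port` from `router`; delete it only if the router created it."""
    router["interfaces"].remove(port["id"])
    if created_from_subnet:
        # Port came from `router-interface-add subnet=...`: clean it up too.
        return "port deleted"
    # Port pre-existed (`router-interface-add port=...`): leave it alone.
    return "port kept"

router = {"interfaces": ["portx"]}
assert remove_router_interface(router, {"id": "portx"}, False) == "port kept"
assert router["interfaces"] == []
```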

  
  ==
  Steps to reproduce
  ==

  # neutron router-interface-delete -h
  usage: neutron router-interface-delete [-h] [--request-format {json}]
 ROUTER INTERFACE

  Remove an internal network interface from a router.

  positional arguments:
    ROUTER                ID or name of the router.
    INTERFACE             The format is "SUBNET|subnet=SUBNET|port=PORT". Either
                          a subnet or port must be specified. Both ID and name
                          are accepted as SUBNET or PORT. Note that "subnet="
                          can be omitted when specifying a subnet.


  [root@tecs218 ~(keystone_admin)]# neutron router-create test
  Created a new router:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | external_gateway_info |  |
  | id| 4039bd93-b183-4250-af0f-e9739ac1a19a |
  | name  | test |
  | status| ACTIVE   |
  | tenant_id | 0d76aad1dda94f83a2a0a55c04547434 |
  +---+--+
  [root@tecs218 ~(keystone_admin)]# neutron router-interface-add test  
port=90e0abe1-852b-4cfe-afd9-2bd31a42c279
  Added interface 90e0abe1-852b-4cfe-afd9-2bd31a42c279 to router test.
  [root@tecs218 ~(keystone_admin)]# neutron router-interface-add test  
port=90e0abe1-852b-4cfe-afd9-2bd31a42c27^C
  [root@tecs218 ~(keystone_admin)]# neutron port-show 
90e0abe1-852b-4cfe-afd9-2bd31a42c279
  
+-+---+
  | Field   | Value 
|
  
+-+---+
  | admin_state_up  | True  
|
  | bandwidth   | 0 
|
  | binding:host_id |   
|
  | binding:profile | {}
|
  | binding:vif_details | {}
|
  | binding:vif_type| unbound   
|
  | binding:vnic_type   | normal
|
  | bond| 0 
|
  | cbs | 0 
|
  | device_id   | 4039bd93-b183-4250-af0f-e9739ac1a19a  
|
  | device_owner| network:

[Yahoo-eng-team] [Bug 1554491] [NEW] xenapi: tools/populate_other_config is broken

2016-03-08 Thread Sulochan Acharya
Public bug reported:

https://github.com/openstack/nova/blob/master/tools/xenserver/populate_other_config.py

Needs to be updated. It looks like the code has not been updated in a while
(this tool is probably not used often), so it is now completely broken: it
tries to use xenapi driver functions that don't exist anymore.

** Affects: nova
 Importance: Undecided
 Assignee: Sulochan Acharya (sulochan-acharya)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Sulochan Acharya (sulochan-acharya)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554491

Title:
  xenapi: tools/populate_other_config is broken

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  
https://github.com/openstack/nova/blob/master/tools/xenserver/populate_other_config.py

  Needs to be updated. It looks like the code has not been updated in a
  while (this tool is probably not used often), so it is now completely
  broken: it tries to use xenapi driver functions that don't exist anymore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554513] [NEW] Test settings are importing local/enabled.

2016-03-08 Thread Rob Cresswell
Public bug reported:

The test settings are importing openstack_dashboard.local.enabled. This
shouldn't be imported for tests, as the tests should only be testing
openstack_dashboard, not whatever plugins happen to be installed at the
time.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1554513

Title:
  Test settings are importing local/enabled.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The test settings are importing openstack_dashboard.local.enabled.
  This shouldn't be imported for tests, as the tests should only be
  testing openstack_dashboard, not whatever plugins happen to be
  installed at the time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1554513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548724] Re: nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate fails on slow build server

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288154
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a7af3258f8d8f4eb4664a126e08b94c85b648e09
Submitter: Jenkins
Branch:master

commit a7af3258f8d8f4eb4664a126e08b94c85b648e09
Author: Matt Riedemann 
Date:   Thu Mar 3 17:06:48 2016 -0500

Extend FakeCryptoCertificate.cert_not_valid_after to 2 hours

Someone reported this test failing on a slow CI system where
it takes 1.5 hours for their stuff to run. It doesn't seem to
matter what our fake mock expiration time is, so bump it to 2
hours.

Change-Id: I0121fe9da5831d6186bf7954271b79a8b3a60eba
Closes-Bug: #1548724


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1548724

Title:
  nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  fails on slow build server

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When I tried to set up a CI build for the nova package (13.0b2), it
  failed on these tests:

  ==
  FAIL: 
nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/test_signature_utils.py", line 306, in 
test_get_certificate
  signature_utils.get_certificate(None, cert_uuid))
File "nova/signature_utils.py", line 319, in get_certificate
  verify_certificate(certificate)
File "nova/signature_utils.py", line 342, in verify_certificate
  % certificate.not_valid_after)
  nova.exception.SignatureVerificationError: Signature verification for the 
image failed: Certificate is not valid after: 2016-02-22 18:53:41.545721 UTC.

  I believe it happens because our CI server is not that fast and the nova
  build-and-test run takes about 1.5 hours. I propose to extend the
  validity interval for the mocked certificate.
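
The committed fix (bumping the fake expiration to two hours) follows exactly this idea. A minimal, self-contained sketch; the helper names are illustrative, not nova's actual test code:

```python
from datetime import datetime, timedelta

# Sketch of the proposed fix: give the mocked certificate a validity window
# that outlives even a slow CI run. The helper names are illustrative, not
# nova's actual test code.
CERT_VALIDITY = timedelta(hours=2)

def make_fake_validity(now=None):
    """Return (not_valid_before, not_valid_after) for a fake certificate."""
    now = now or datetime.utcnow()
    return now, now + CERT_VALIDITY

def is_cert_valid(check_time, not_valid_before, not_valid_after):
    return not_valid_before <= check_time <= not_valid_after

before, after = make_fake_validity()
# A build that takes 1.5 hours still sees a valid certificate:
assert is_cert_valid(before + timedelta(minutes=90), before, after)
```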

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1548724/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554519] [NEW] separate device owner flag for HA router interface port

2016-03-08 Thread venkata anil
Public bug reported:

Currently an HA router interface port uses DEVICE_OWNER_ROUTER_INTF as its
device owner (like a normal router interface). So to check whether a port
is an HA router interface port, we have to perform a DB operation.


The Neutron server in many places (functions in plugin.py, rpc.py,
mech_driver.py [1]) may need to check whether a port is an HA router
interface port and perform a different set of operations, and today it has
to access the DB for this. If this information were instead available as
the port's device owner, we could avoid a DB access every time.


[1] ml2_db.is_ha_port(session, port) in below files
https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/plugin.py
https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/drivers/l2pop/mech_driver.py
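
A rough sketch of the proposal; the DEVICE_OWNER_HA_ROUTER_INTF name and value are assumptions for illustration, not neutron constants (only DEVICE_OWNER_ROUTER_INTF matches the existing constant's value):

```python
# Sketch of the proposal: encode "HA router interface" in the port's
# device_owner so the check is an in-memory string comparison.
# DEVICE_OWNER_HA_ROUTER_INTF is an assumed name/value for illustration.
DEVICE_OWNER_ROUTER_INTF = "network:router_interface"
DEVICE_OWNER_HA_ROUTER_INTF = "network:ha_router_replicated_interface"

def is_ha_router_interface(port):
    """Answer from the port dict alone, with no DB round trip."""
    return port.get("device_owner") == DEVICE_OWNER_HA_ROUTER_INTF

assert is_ha_router_interface({"device_owner": DEVICE_OWNER_HA_ROUTER_INTF})
assert not is_ha_router_interface({"device_owner": DEVICE_OWNER_ROUTER_INTF})
```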

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554519

Title:
  separate device owner flag for HA router interface port

Status in neutron:
  New

Bug description:
  Currently an HA router interface port uses DEVICE_OWNER_ROUTER_INTF as
  its device owner (like a normal router interface). So to check whether a
  port is an HA router interface port, we have to perform a DB operation.

  
  The Neutron server in many places (functions in plugin.py, rpc.py,
  mech_driver.py [1]) may need to check whether a port is an HA router
  interface port and perform a different set of operations, and today it
  has to access the DB for this. If this information were instead available
  as the port's device owner, we could avoid a DB access every time.

  
  [1] ml2_db.is_ha_port(session, port) in below files
  https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/plugin.py
  
https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/drivers/l2pop/mech_driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554518] [NEW] stable/liberty periodic unit test job broken

2016-03-08 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/periodic-stable/periodic-neutron-
python27-liberty/acbfdc3/console.html#_2016-03-08_06_06_34_694

This is because the bitrot job templates currently used for periodic jobs
do not prepare the file.

** Affects: neutron
 Importance: Medium
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554518

Title:
  stable/liberty periodic unit test job broken

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/periodic-stable/periodic-neutron-
  python27-liberty/acbfdc3/console.html#_2016-03-08_06_06_34_694

  This is because the bitrot job templates currently used for periodic
  jobs do not prepare the file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554522] [NEW] time zone from local_settings.py is not taken into account

2016-03-08 Thread Matthias Runge
Public bug reported:

one can set the time zone in local_settings.py

but it is not taken into account for time zone settings at all.

** Affects: horizon
 Importance: Undecided
 Assignee: Matthias Runge (mrunge)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1554522

Title:
  time zone from local_settings.py is not taken into account

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  one can set the time zone in local_settings.py

  but it is not taken into account for time zone settings at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1554522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454531] Re: list_user_projects() can't get filtered by 'domain_id'.

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/182569
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=3425c1fffe9cb40c759ccec516483e06225d65cd
Submitter: Jenkins
Branch:master

commit 3425c1fffe9cb40c759ccec516483e06225d65cd
Author: darren-wang 
Date:   Wed May 13 16:28:52 2015 +0800

Adding 'domain_id' filter to list_user_projects()

Closes-Bug: #1454531
Change-Id: I01af5376505f49c3c7c1906b7bc9511adb114632


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1454531

Title:
  list_user_projects() can't get filtered by 'domain_id'.

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Here is our use case: we want our tenant domain admin (e.g., Bob) to
  have this capability: Bob (domain-scoped) can list the projects that a
  user has roles on, and the projects Bob gets should belong only to
  Bob's scoping domain.

  When we read the rule in policy.v3cloudsample.json for
  "identity:list_user_projects", we were happy to see it is exactly what we want:
  {...
  "admin_and_matching_domain_id": "rule:admin_required and 
domain_id:%(domain_id)s",
  "identity:list_user_projects": "rule:owner or 
rule:admin_and_matching_domain_id",
  ...}

  I thought we could use this API with the query string 'domain_id', so
  that Bob can query only projects in his scoping domain, but it doesn't
  work: the @controller.filterprotected('enabled', 'name') decorator on
  list_user_projects() excludes the possibility of taking 'domain_id' as a
  query string, even though it is useful to us and recorded in the policy
  file.
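
The fix that landed adds 'domain_id' to that filter list. A hedged sketch of the mechanism; `filterprotected` below is a toy stand-in for keystone's controller.filterprotected, not the real decorator:

```python
# Hedged sketch of the fix: let 'domain_id' through the filter whitelist so
# it reaches both the policy check and the driver. `filterprotected` here is
# a toy stand-in for keystone's controller.filterprotected, not the real one.
def filterprotected(*allowed_filters):
    def wrapper(func):
        def inner(context, **query):
            # Drop any query-string key that is not explicitly allowed.
            filters = {k: v for k, v in query.items() if k in allowed_filters}
            return func(context, filters)
        return inner
    return wrapper

@filterprotected('enabled', 'name', 'domain_id')  # 'domain_id' is the addition
def list_user_projects(context, filters):
    return filters

# Before the fix, 'domain_id' was silently dropped at this layer:
assert list_user_projects({}, domain_id='d1', bogus='x') == {'domain_id': 'd1'}
```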

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1454531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280084] Re: get trust missing @controller.protected

2016-03-08 Thread David Stanek
Marking as WONTFIX because we can't really add the decorator at this
point.

** Changed in: keystone
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1280084

Title:
  get trust missing @controller.protected

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  Currently, there is no @controller.protected decorator surrounding the 
get_trust function at the trust controller level.
  Since there is an entry for it in our sample policy.json files, it probably 
should be protected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1280084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501233] Re: DB downgrade is not supported in OpenStack now

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/240451
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=e3366afdfb227794fd46cfd8b2cf553a7ff1a709
Submitter: Jenkins
Branch:master

commit e3366afdfb227794fd46cfd8b2cf553a7ff1a709
Author: wangxiyuan 
Date:   Fri Oct 30 14:54:48 2015 +0800

Add a deprecation warning to the DB downgrade

DB downgrades are not supported in OpenStack now. Glance will
remove them in the N release; in the M cycle we should add a warning to users.

Closes-bug: #1501233
Implements: blueprint remove-db-downgrade
Change-Id: I2791d8421abc0ad6c4905d5ddaa3fa99f264e333


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1501233

Title:
  DB downgrade is not supported in OpenStack now

Status in Glance:
  Fix Released

Bug description:
  As downgrades are not supported after Kilo in OpenStack, we should
  remove them now.

  Roll backs can be performed as mentioned in the below link:

  http://docs.openstack.org/openstack-ops/content/ops_upgrades-roll-
  back.html

  Glance will remove downgrade support in the N release, so we should add
  a warning to make users aware of it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1501233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553149] Re: Instance in ERROR state due to ConnectFailure with keystone

2016-03-08 Thread Dolph Mathews
Apache will refuse connections that it cannot assign to threads once
MaxClients is exhausted, and if you're only running 10 threads, then I'm
also guessing that your MaxClients is set to be less than the number of
concurrent connections you're throwing at it.

I'm closing this because this is just an Apache tuning issue.
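
As a rough illustration of the tuning knobs involved (the numbers are placeholders, not a recommendation; size them against your expected concurrency), the prefork-MPM section looks roughly like:

```apache
# Illustrative only: MaxClients caps the connections Apache will service
# concurrently; anything beyond it is queued and eventually refused.
<IfModule mpm_prefork_module>
    StartServers          10
    MinSpareServers       10
    MaxSpareServers       20
    MaxClients           150
    MaxRequestsPerChild    0
</IfModule>
```

With a concurrency-50 Rally run, MaxClients (or the mod_wsgi process/thread count) needs headroom above 50.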

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1553149

Title:
  Instance in ERROR state due to ConnectFailure with keystone

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  When trying to run the Rally scenario below with concurrency 50, we see
  an issue with keystone. Can someone take a look?
  NOTE: things work fine with concurrency 10.

  1. Create a tenant and a network.
  2. Create a T1 router and set the external network as its gateway.
  3. Add the network created in step 1 to the T1 router.
  4. Launch an instance (on KVM) in the private network and assign a FIP. Ping the FIP.

  
  Setup:

  Single controller(32vCPU, 48GB RAM)
  3 Network Nodes
  100 ESX computes and 100 KVM computes

  Rally reports and logs are attached to the bug.

  Logs:

  2016-03-01 01:26:34.699 DEBUG oslo_concurrency.lockutils 
[req-409c8595-d093-4cfe-8b98-b49d2c2accad 
ctx_rally_d6ed151ea67e4b78930c39c406fa64ed_user_0 
ctx_rally_9526f233-a1b9-446b-beb6-d14dc678ff37_tenant_10] Releasing semaphore 
"refresh_cache-8c324106-c6dd-4b90-876d-e3cc33adfebf" from (pid=26585) lock 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
  2016-03-01 01:26:34.704 ERROR nova.compute.manager 
[req-409c8595-d093-4cfe-8b98-b49d2c2accad 
ctx_rally_d6ed151ea67e4b78930c39c406fa64ed_user_0 
ctx_rally_9526f233-a1b9-446b-beb6-d14dc678ff37_tenant_10] [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] Instance failed to spawn
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] Traceback (most recent call last):
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2190, in _build_resources
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] yield resources
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2036, in _build_and_run_instance
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] block_device_info=block_device_info)
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2758, in spawn
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] admin_pass=admin_password)
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3251, in _create_image
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] content=files, extra_md=extra_md, 
network_info=network_info)
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/api/metadata/base.py", line 160, in __init__
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] self.network_metadata = 
netutils.get_network_metadata(network_info)
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/virt/netutils.py", line 185, in get_network_metadata
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] if not network_info:
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/network/model.py", line 526, in __len__
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] return self._sync_wrapper(fn, *args, 
**kwargs)
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/network/model.py", line 513, in _sync_wrapper
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] self.wait()
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/network/model.py", line 545, in wait
  2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] self[:] = self._gt.wait()
  2016-03-01 01:26:34.704 TRACE n

[Yahoo-eng-team] [Bug 1516946] Re: keystone WSGI fail: ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option

2016-03-08 Thread Dolph Mathews
I've run into this myself.

This is the result of using outdated WSGI startup scripts. As part of
your upgrade process, you must switch to the ones from the release
you're trying to deploy.

This is because keystone has refactored some responsibilities out of
those WSGI scripts, so your scripts are now redundant with what Keystone
is trying to do.

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1516946

Title:
  keystone WSGI fail: ArgsAlreadyParsedError: arguments already parsed:
  cannot register CLI option

Status in OpenStack Identity (keystone):
  Invalid
Status in puppet-keystone:
  Invalid

Bug description:
  Upgrade from Kilo to Liberty broke Keystone. Was running just fine as
  WSGI on Apache with Kilo. Provisioned a new test cluster using puppet-
  keystone master branch and getting the following error:

  mod_wsgi (pid=28386): Target WSGI script '/usr/lib/cgi-bin/keystone/main' 
cannot be loaded as Python module.
  mod_wsgi (pid=28386): Exception occurred processing WSGI script 
'/usr/lib/cgi-bin/keystone/main'.
  Traceback (most recent call last):
File "/usr/lib/cgi-bin/keystone/main", line 25, in 
  application = wsgi_server.initialize_application(name)
File "/usr/lib/python2.7/dist-packages/keystone/server/wsgi.py", line 51, 
in initialize_application
  common.configure()
File "/usr/lib/python2.7/dist-packages/keystone/server/common.py", line 31, 
in configure
  config.configure()
File "/usr/lib/python2.7/dist-packages/keystone/common/config.py", line 
1200, in configure
  help='Do not monkey-patch threading system modules.'))
File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1824, in 
__inner
  result = f(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1999, in 
register_cli_opt
  raise ArgsAlreadyParsedError("cannot register CLI option")
  ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1516946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554197] Re: Deleting router-gateway-port throws a DB foreign key error

2016-03-08 Thread Andreas Scheuring
*** This bug is a duplicate of bug 1535707 ***
https://bugs.launchpad.net/bugs/1535707

Marking this as a duplicate, as the root cause of both is the same. I'll
expand the other bug's description to also cover this issue.

** This bug has been marked a duplicate of bug 1535707
   Create router with external network attached doesn't notify l3 agent

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554197

Title:
  Deleting router-gateway-port throws a DB foreign key error

Status in neutron:
  New

Bug description:
  1. High level description:

  Create an external network and associate it to a router during creation
  using the "--external_gateway_info type=dict network_id=" option.
  This creates a gateway port. When you attempt to delete this gateway
  port, you get a DB foreign key error as follows:

  "DBError: (IntegrityError) (1451, 'Cannot delete or update a pare
  nt row: a foreign key constraint fails (`neutron`.`routers`, CONSTRAINT 
`routers_ibfk_1` FOREIGN KEY (`gw_port_id`) REFERENC
  ES `ports` (`id`))') 'DELETE FROM ports WHERE ports.id = %s' 
('76573418-f9ff-4f2d-8ffb-d0a200f7f1ea',)"

  
  2. Pre-conditions: 

  All resources are created as a admin tenant

  
  3. Step-by-step reproduction steps:

  Steps and error message posted here:
  http://paste.openstack.org/show/489584/

  4. Expected output:

  If this action is not supported, then an error message similar to the
  following should be thrown:

  "More than one external network exists"

  
  5. Actual output:

  we get "Request Failed: internal server error while processing your
  request."

  With a DB trace on neutron logs

  6. Version:
  Master Branch (Mitaka) (also applicable in liberty/kilo)
  Devstack installation for Mitaka

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341420] Re: gap between scheduler selection and claim causes spurious failures when the instance is the last one to fit

2016-03-08 Thread James Slagle
** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341420

Title:
  gap between scheduler selection and claim causes spurious failures
  when the instance is the last one to fit

Status in OpenStack Compute (nova):
  In Progress
Status in tripleo:
  New

Bug description:
  There is a race between the scheduler in select_destinations, which
  selects a set of hosts, and the nova compute manager, which claims
  resources on those hosts when building the instance. The race is
  particularly noticeable with Ironic, where every request will consume a
  full host, but can turn up on libvirt etc too. Multiple schedulers
  will likely exacerbate this too unless they are in a version of python
  with randomised dictionary ordering, in which case they will make it
  better :).

  I've put https://review.openstack.org/106677 up to remove a comment
  which comes from before we introduced this race.

  One mitigating aspect to the race in the filter scheduler _schedule
  method attempts to randomly select hosts to avoid returning the same
  host in repeated requests, but the default minimum set it selects from
  is size 1 - so when heat requests a single instance, the same
  candidate is chosen every time. Setting that number higher can avoid
  all concurrent requests hitting the same host, but it will still be a
  race, and still likely to fail fairly hard at near-capacity situations
  (e.g. deploying all machines in a cluster with Ironic and Heat).

  Folk wanting to reproduce this: take a decent size cloud - e.g. 5 or
  10 hypervisor hosts (KVM is fine). Deploy up to 1 VM left of capacity
  on each hypervisor. Then deploy a bunch of VMs one at a time but very
  close together - e.g. use the python API to get cached keystone
  credentials, and boot 5 in a loop.

  If using Ironic you will want https://review.openstack.org/106676 to
  let you see which host is being returned from the selection.

  Possible fixes:
   - have the scheduler be a bit smarter about returning hosts - e.g. track 
destination selection counts since the last refresh and weight hosts by that 
count as well
   - reinstate actioning claims into the scheduler, allowing the audit to 
correct any claimed-but-not-started resource counts asynchronously
   - special case the retry behaviour if there are lots of resources available 
elsewhere in the cluster.
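  The first fix above (weighting hosts by destination selection counts
  since the last refresh) can be sketched as follows. This is a toy
  illustration, not nova's actual scheduler code; the class and weighting
  rule are invented for the example:

```python
import random
from collections import Counter


class StickyAwareScheduler:
    """Toy scheduler: penalize hosts already handed out since the last
    resource refresh, so concurrent requests spread across hosts
    instead of all racing on the single top-weighted one."""

    def __init__(self, hosts):
        self.hosts = hosts
        self.recent = Counter()  # selections since the last refresh

    def select_destination(self, weights):
        # Subtract the recent-selection count from the raw weight;
        # break any remaining ties randomly.
        best = max(self.hosts,
                   key=lambda h: (weights[h] - self.recent[h],
                                  random.random()))
        self.recent[best] += 1
        return best

    def refresh(self):
        # Called once claims are reflected in the real weights.
        self.recent.clear()


sched = StickyAwareScheduler(['host1', 'host2', 'host3'])
weights = {'host1': 10, 'host2': 10, 'host3': 10}
picks = [sched.select_destination(weights) for _ in range(3)]
# With equal weights, three back-to-back requests land on three
# distinct hosts rather than repeatedly on the same one.
```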

  Stats wise, I just testing a 29 instance deployment with ironic and a
  heat stack, with 45 machines to deploy onto (so 45 hosts in the
  scheduler set) and 4 failed with this race - which means they
  recheduled and failed 3 times each - or 12 cases of scheduler racing
  *at minimum*.

  background chat

  15:43 < lifeless> mikal: around? I need to sanity check something
  15:44 < lifeless> ulp, nope, am sure of it. filing a bug.
  15:45 < mikal> lifeless: ok
  15:46 < lifeless> mikal: oh, you're here, I will run it past you :)
  15:46 < lifeless> mikal: if you have ~5m
  15:46 < mikal> Sure
  15:46 < lifeless> so, symptoms
  15:46 < lifeless> nova boot <...> --num-instances 45 -> works fairly 
reliably. Some minor timeout related things to fix but nothing dramatic.
  15:47 < lifeless> heat create-stack <...> with a stack with 45 instances in 
it -> about 50% of instances fail to come up
  15:47 < lifeless> this is with Ironic
  15:47 < mikal> Sure
  15:47 < lifeless> the failure on all the instances is the retry-three-times 
failure-of-death
  15:47 < lifeless> what I believe is happening is this
  15:48 < lifeless> the scheduler is allocating the same weighed list of hosts 
for requests that happen close enough together
  15:49 < lifeless> and I believe its able to do that because the target hosts 
(from select_destinations) need to actually hit the compute node manager and 
have
  15:49 < lifeless> with rt.instance_claim(context, instance, 
limits):
  15:49 < lifeless> happen in _build_and_run_instance
  15:49 < lifeless> before the resource usage is assigned
  15:49 < mikal> Is heat making 45 separate requests to the nova API?
  15:49 < lifeless> eys
  15:49 < lifeless> yes
  15:49 < lifeless> thats the key difference
  15:50 < lifeless> same flavour, same image
  15:50 < openstackgerrit> Sam Morrison proposed a change to openstack/nova: 
Remove cell api overrides for lock and unlock  
https://review.openstack.org/89487
  15:50 < mikal> And you have enough quota for these instances, right?
  15:50 < lifeless> yes
  15:51 < mikal> I'd have to dig deeper to have an answer, but it sure does 
seem worth filing a bug for
  15:51 < lifeless> my theory is that there is enough time between 
select_destinations in the conductor, and _build_and_run_instance in compute 
for another request to come in the front door and be scheduled to the same host
  15:51 < mikal> That seems possible to me
  15:52 < li

[Yahoo-eng-team] [Bug 1554518] Re: stable/liberty periodic unit test job broken

2016-03-08 Thread Ihar Hrachyshka
Should be fixed by https://review.openstack.org/#/c/289918/

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554518

Title:
  stable/liberty periodic unit test job broken

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/periodic-stable/periodic-neutron-
  python27-liberty/acbfdc3/console.html#_2016-03-08_06_06_34_694

  This is because currently bitrot job templates used for periodic jobs
  do not prepare the file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553769] Re: vpnaas: mitaka db migrations are placed in liberty directory

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289055
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=1421b1484a77e025e82e5e5c1336d1f708208643
Submitter: Jenkins
Branch:master

commit 1421b1484a77e025e82e5e5c1336d1f708208643
Author: Akihiro Motoki 
Date:   Mon Mar 7 02:48:24 2016 +0900

Move db migration added during Mitaka to proper directory

DB migrations related to VPNaaS endpoint groups and Multiple local
subnets were added during the Mitaka dev cycle. They were accidentally
placed in the liberty directory. We can safely move these files
as db migrations do not depend on file paths.

Closes-Bug: #1553769
Change-Id: I54dd86bb302bed8564430a32012b849f0315d427


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553769

Title:
  vpnaas: mitaka db migrations are placed in liberty directory

Status in neutron:
  Fix Released

Bug description:
  In the vpnaas db migration tree we don't have a 'mitaka' directory, but we
  actually do have mitaka db migrations.
  They should be placed in a mitaka directory.

  http://paste.openstack.org/show/489478/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1553769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367110] Re: novaclient quota-update should handle tenant names

2016-03-08 Thread Stephen Finucane
I don't think this is a bug, but rather a feature request. novaclient
doesn't say that it will accept tenant names - only tenant IDs:

$ nova help quota-show
usage: nova quota-show [--tenant ] [--user ]

List the quotas for a tenant/user.

Optional arguments:
  --tenant   ID of tenant to list the quotas for.
  --user   ID of user to list the quotas for.

This sounds like a feature request. This functionality is available in
openstackclient and could be "backported", though I think development of
"novaclient" has all but stopped at this point. Marking as "invalid"
until someone disagrees with this opinion :)

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367110

Title:
  novaclient quota-update should handle tenant names

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  nova quota-update should either reject tenant_ids which don't match a
  valid uuid or translate tenant names to tenant ids
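  A hedged sketch of the requested behavior: validate IDs as UUIDs and
  otherwise try to resolve the value as a name. The lookup dict stands in
  for a real keystone client call, and the function name is invented for
  the example:

```python
import uuid


def resolve_tenant(value, name_to_id):
    """Accept either a tenant ID or a tenant name; reject anything
    that is neither a valid UUID nor a known name."""
    try:
        uuid.UUID(value)          # parses -> treat the value as an ID
        return value
    except ValueError:
        pass
    if value in name_to_id:       # otherwise try it as a name
        return name_to_id[value]
    raise ValueError('no tenant with ID or name: %s' % value)
```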

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554601] [NEW] able to update health monitor attributes which is attached to pool in lbaas

2016-03-08 Thread abhishek6254
Public bug reported:

Reproduced a bug in Load Balancer:
1. Created a pool.
2. Attached members to the pool.
3. Associated a health monitor with the pool.
4. Associated a VIP with the pool.
5. When I edit the attributes of the health monitor, the UI shows the error
"Error: Failed to update health monitor", but the monitor is actually updated
successfully.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554601

Title:
  able to update health monitor attributes which is attached to pool in
  lbaas

Status in neutron:
  New

Bug description:
  Reproduced a bug in Load Balancer:
  1. Created a pool.
  2. Attached members to the pool.
  3. Associated a health monitor with the pool.
  4. Associated a VIP with the pool.
  5. When I edit the attributes of the health monitor, the UI shows the error
  "Error: Failed to update health monitor", but the monitor is actually
  updated successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550525] Re: Abort an ongoing live migration

2016-03-08 Thread Andrea Rosa
I added details to describe the new feature introduced in nova by the 2.24 
microversion; please let me know if more information is required.
I think that the new feature needs to be documented in the API guide.

** Project changed: nova => openstack-api-site

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550525

Title:
  Abort an ongoing live migration

Status in openstack-api-site:
  Confirmed

Bug description:
  https://review.openstack.org/277971
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit fa002925460e70d988d1b4dd1ea594c680a43740
  Author: Andrea Rosa 
  Date:   Fri Feb 5 08:31:06 2016 +

  Abort an ongoing live migration
  
  This change adds a DELETE call on the server-migrations object to cancel
  a running live migration of a specific instance.
  To perform the cancellation the virtualization driver needs to support
  it; if the feature is not supported we return an error.
  We allow a cancellation of a migration only if the migration is
  running at the moment of the request and if the migration type is equal
  to 'live-migration'.
  In this change we implement this feature for the libvirt driver.
  When the cancellation of a live migration succeeds we roll back the live
  migration and set the state of the Migration object to
  'cancelled'.
  The implementation of this change is based on the work done by the
  implementation of the feature called 'force live migration':
  https://review.openstack.org/245921
  
  DocImpact
  ApiImpact
  
  Implements blueprint: abort-live-migration
  Change-Id: I1ff861e54997a069894b542bd764ac3ef1b3dbb2
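  Under the 2.24 microversion described above, the cancel call is a DELETE
  on the server-migrations resource. A minimal sketch of building such a
  request follows; the URL shape and header name follow the commit
  message's description, the IDs and base URL are placeholders, and no
  request is actually sent:

```python
def build_abort_request(compute_url, server_id, migration_id):
    """Compose the DELETE that cancels an in-progress live migration."""
    url = '%s/servers/%s/migrations/%s' % (compute_url, server_id,
                                           migration_id)
    headers = {
        # The abort operation only exists from microversion 2.24 on.
        'X-OpenStack-Nova-API-Version': '2.24',
    }
    return 'DELETE', url, headers


method, url, headers = build_abort_request(
    'http://compute.example/v2.1', 'SERVER_UUID', 'MIGRATION_ID')
```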

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1550525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543562] Re: mitaka pci_request object needs a migration script for an online data migration

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/278079
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0970f2a90016c09b86e34a53c2b8734a68a6f917
Submitter: Jenkins
Branch:master

commit 0970f2a90016c09b86e34a53c2b8734a68a6f917
Author: Nikola Dipanov 
Date:   Fri Feb 12 21:40:47 2016 +

nova-manage: Declare a PciDevice online migration script

We want to make sure there is a way for operators to migrate all PciDevice
records to the new format. The original online migration code was added
as part of the following change:

https://review.openstack.org/#/c/249015/

Closes-bug: 1543562
Change-Id: I2fc2f9ffac860cf25535abc9b53733bce6ddf345


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543562

Title:
  mitaka pci_request object needs a migration script for an online data
  migration

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The following change adds an online data migration to the PciDevice
  object.

  https://review.openstack.org/#/c/249015/ (50355c45)

  When we do that we normally want to couple it together with a script
  that will allow operators to run the migration code even for rows that
  do not get accessed and saved during normal operation, as we normally
  drop any compatibility code in the release following the change. This
  is normally done using a nova-manage script, an example of which can
  be seen in the following commit:

  https://review.openstack.org/#/c/135067/

  The above patch did not add such a script and so does not provide
  admins with any tools to make sure their data is updated for the N
  release where we expect the data to be migrated as per our current
  upgrade policy (http://docs.openstack.org/developer/nova/upgrade.html
  #migration-policy)
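  The nova-manage online migration pattern referenced here is a batch
  function returning (found, done) counts, so operators can re-run it
  until no rows remain. A toy sketch under those assumptions (plain dicts
  stand in for DB rows; real code goes through the nova DB API, and the
  field name is invented):

```python
def migrate_records(records, max_count):
    """Process up to max_count rows; return (found, done) so the caller
    knows when to stop (done == 0) and how many rows were converted."""
    found = done = 0
    for rec in records:
        if found == max_count:
            break
        found += 1
        if rec.get('fmt') != 'new':
            rec['fmt'] = 'new'     # the per-row migration itself
            done += 1
    return found, done
```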

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554059] Re: Cleanup the usage of deprecated CONF.network_api_class in hyper-v driver

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289457
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=61a90eaf84e9cad2d8783ab2cfea4575e2dece70
Submitter: Jenkins
Branch:master

commit 61a90eaf84e9cad2d8783ab2cfea4575e2dece70
Author: Sean Dague 
Date:   Mon Mar 7 11:58:57 2016 -0500

Fix hyperv use of deprecated network_api_class

Hyperv driver had some residual use of network_api_class after it was
deprecated in the core. This makes hyperv use nova.network.is_neutron
for selecting its network drivers.

Change-Id: Icfcafd031a793a4713c2997adc5c84bb9c9864fe
Closes-Bug: #1554059


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554059

Title:
  Cleanup the usage of deprecated CONF.network_api_class in hyper-v
  driver

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  As we already deprecated network_api_class config option in this
  series patches
  https://review.openstack.org/#/q/topic:deprecate_managers

  But in hyper-v driver, there still have a reference to the
  network_api_class config option. We should cleanup it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554617] [NEW] Tests for aggregates use getattr() improperly

2016-03-08 Thread Ryan Rossiter
Public bug reported:

Because objects do not raise AttributeError if an attribute is not set,
the default functionality of getattr() cannot be used with objects.
Instead, an 'in' check needs to be used on the object, and if that
passes, I can use getattr(). If that fails, I need to give the default.
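The pattern the report describes can be sketched like this. FakeObject is
an invented stand-in for an object whose unset fields raise something
other than AttributeError, which is exactly why getattr()'s default
never kicks in:

```python
class FakeObject:
    """Stand-in for an object whose unset fields do NOT raise
    AttributeError, so getattr(obj, name, default) never falls back."""
    fields = ('hosts', 'metadata')

    def __init__(self, **values):
        self._values = values

    def __contains__(self, name):        # enables the 'in' check
        return name in self._values

    def __getattr__(self, name):
        if name in self.fields:
            if name in self._values:
                return self._values[name]
            raise NotImplementedError('field %r is not set' % name)
        raise AttributeError(name)


def safe_get(obj, name, default=None):
    # 'in' check first; only once it passes is getattr() safe to call.
    return getattr(obj, name) if name in obj else default


agg = FakeObject(hosts=['host1'])
```

Calling plain getattr(agg, 'metadata', {}) on such an object raises
instead of returning the default, which is the bug being fixed.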

** Affects: nova
 Importance: Undecided
 Assignee: Ryan Rossiter (rlrossit)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554617

Title:
  Tests for aggregates use getattr() improperly

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Because objects do not raise AttributeError if an attribute is not
  set, the default functionality of getattr() cannot be used with
  objects. Instead, an 'in' check needs to be used on the object, and if
  that passes, I can use getattr(). If that fails, I need to give the
  default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554392] Re: Set extra route for DVR might cause error

2016-03-08 Thread Swaminathan Vasudevan
This is a known issue: the router does not have an external network
interface in the router namespace, so configuring an extra route whose
next hop has no corresponding interface in the router namespace fails.

This was a decision we made to avoid extra complexity: we do not add
the external routes in the router namespace, only in the snat
namespace.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554392

Title:
  Set extra route for DVR might cause error

Status in neutron:
  Opinion

Bug description:
  With a DVR router I have:
  external network: 172.24.4.0/24
  internal network: 10.0.0.0/24

  I want to set an extra route for it, so I execute the following
  command:

  neutron router-update router1 --route
  destination=20.0.0.0/24,nexthop=172.24.4.6

  But I get this error at the output of neutron-l3-agent.

  ERROR neutron.agent.linux.utils [-] Exit code: 2; Stdin: ; Stdout: ;
  Stderr: RTNETLINK answers: Network is unreachable

  The reason is that the DVR router will set the extra route in both the
  snat and qrouter namespaces. However, the qrouter namespace does not
  have a route to the external network, so an error is reported when the
  l3-agent tries to add a route whose nexthop is on the external network
  to the qrouter namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554076] Re: neutron_lbaas.tests.unit.drivers.common.test_agent_callbacks.TestLoadBalancerCallbacks fails due to new description field in subnet

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289362
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=d3386d3e97fa7ff5cf1a0bbb25a6cfed88427d69
Submitter: Jenkins
Branch:master

commit d3386d3e97fa7ff5cf1a0bbb25a6cfed88427d69
Author: Ihar Hrachyshka 
Date:   Mon Mar 7 15:36:38 2016 +0100

Uncouple lbaas object models from neutron core plugin results

Since I6e1ef53d7aae7d04a5485810cc1db0a8eb125953, subnets have a
'description' field. We should accommodate it on the lbaas side.

Instead of introducing yet another field for lbaas object, make base
class filter out unknown fields in from_dict() before passing them into
__init__.

Change-Id: Ib00f61cfbc13bf934c31eb476039728830e79e92
Closes-Bug: #1554076


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554076

Title:
  
neutron_lbaas.tests.unit.drivers.common.test_agent_callbacks.TestLoadBalancerCallbacks
  fails due to new description field in subnet

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/97/288797/5/check/gate-neutron-lbaas-
  python27/55bdb21/testr_results.html.gz

  Traceback (most recent call last):
File "neutron_lbaas/tests/unit/drivers/common/test_agent_callbacks.py", 
line 150, in test_get_loadbalancer_active
  ctx, loadbalancer['loadbalancer']['id']
File "neutron_lbaas/drivers/common/agent_callbacks.py", line 74, in 
get_loadbalancer
  subnet_dict))
File "neutron_lbaas/services/loadbalancer/data_models.py", line 183, in 
from_dict
  return Subnet(**model_dict)
  TypeError: __init__() got an unexpected keyword argument 'description'
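  The fix described in the commit message (filter out unknown fields in
  from_dict() before passing them to __init__) can be sketched roughly
  like this; the class names mirror the traceback, but the bodies are a
  simplified illustration, not neutron-lbaas's actual data models:

```python
import inspect


class BaseDataModel:
    @classmethod
    def from_dict(cls, model_dict):
        # Keep only the keys that __init__ actually accepts, so new
        # neutron fields (like 'description') don't raise TypeError.
        params = inspect.signature(cls.__init__).parameters
        known = {k: v for k, v in model_dict.items() if k in params}
        return cls(**known)


class Subnet(BaseDataModel):
    def __init__(self, id=None, cidr=None):
        self.id = id
        self.cidr = cidr


subnet = Subnet.from_dict({'id': 's1', 'cidr': '10.0.0.0/24',
                           'description': 'new field'})  # no TypeError
```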

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504876] Re: VMware: unable to soft reboot a VM when there is a soft lockup

2016-03-08 Thread Tracy Jones
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504876

Title:
  VMware: unable to soft reboot a VM when there is a soft lockup

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  In the event that there is a kernel exception in the guest and a soft
  reboot is invoked, the instance cannot be rebooted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554631] [NEW] Cinder exceptions returned from nova rest api as 500 errors

2016-03-08 Thread Ryan Rossiter
Public bug reported:

When the nova volume API makes calls into the Cinder API using
cinderclient, if cinder raises an exception like Forbidden or OverLimit,
the nova volume api does not catch these exceptions. So they go up to
the nova rest api, resulting in a 500 being returned.

Here's an example from a tempest test:

Traceback (most recent call last):
  File "/home/ubuntu/tempest/tempest/api/compute/volumes/test_volumes_get.py", 
line 51, in test_volume_create_get_delete
metadata=metadata)['volume']
  File "/home/ubuntu/tempest/tempest/lib/services/compute/volumes_client.py", 
line 55, in create_volume
resp, body = self.post('os-volumes', post_body)
  File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 259, in 
post
return self.request('POST', url, extra_headers, headers, body)
  File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 642, in 
request
resp, resp_body)
  File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 761, in 
_error_checker
message=message)
tempest.lib.exceptions.ServerFault: Got server fault
Details: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.


The volume API needs to wrap these exceptions and return the nova
equivalent to the rest API so the appropriate return code can be
returned.
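The wrapping the report calls for can be sketched with a translation
decorator. The exception classes below are invented stand-ins for the
cinderclient and web-layer exceptions; the real mapping would live in
nova's volume API layer:

```python
import functools

# Hypothetical stand-ins for cinderclient exceptions and their
# web-layer equivalents.
class CinderForbidden(Exception): pass
class CinderOverLimit(Exception): pass

class HTTPForbidden(Exception): status = 403
class HTTPTooManyRequests(Exception): status = 429

_TRANSLATIONS = [
    (CinderForbidden, HTTPForbidden),
    (CinderOverLimit, HTTPTooManyRequests),
]


def translate_cinder_exception(func):
    """Map client exceptions to API exceptions so the REST layer can
    return a proper status code instead of a generic 500."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            for src, dst in _TRANSLATIONS:
                if isinstance(exc, src):
                    raise dst(str(exc))
            raise
    return wrapper


@translate_cinder_exception
def create_volume(size):
    # Pretend backend: over-quota requests raise a client exception.
    if size > 10:
        raise CinderOverLimit('volume quota exceeded')
    return {'size': size}
```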

** Affects: nova
 Importance: Undecided
 Assignee: Ryan Rossiter (rlrossit)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ryan Rossiter (rlrossit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554631

Title:
  Cinder exceptions returned from nova rest api as 500 errors

Status in OpenStack Compute (nova):
  New

Bug description:
  When the nova volume API makes calls into the Cinder API using
  cinderclient, if cinder raises an exception like Forbidden or
  OverLimit, the nova volume api does not catch these exceptions. So
  they go up to the nova rest api, resulting in a 500 to be returned.

  Here's an example from a tempest test:

  Traceback (most recent call last):
File 
"/home/ubuntu/tempest/tempest/api/compute/volumes/test_volumes_get.py", line 
51, in test_volume_create_get_delete
  metadata=metadata)['volume']
File "/home/ubuntu/tempest/tempest/lib/services/compute/volumes_client.py", 
line 55, in create_volume
  resp, body = self.post('os-volumes', post_body)
File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 259, in 
post
  return self.request('POST', url, extra_headers, headers, body)
File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 642, in 
request
  resp, resp_body)
File "/home/ubuntu/tempest/tempest/lib/common/rest_client.py", line 761, in 
_error_checker
  message=message)
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  

  The volume API needs to wrap these exceptions and return the nova
  equivalent to the rest API so the appropriate return code can be
  returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553619] Re: string in nova.service.js cannot control the word order in translations

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288947
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=075d9b3624d33c924a88e0d1b77e87378d770f86
Submitter: Jenkins
Branch:master

commit 075d9b3624d33c924a88e0d1b77e87378d770f86
Author: Akihiro Motoki 
Date:   Sun Mar 6 08:27:49 2016 +0900

Use interpolate in JS to allow translators to control word order

String concatenation is not recommended for better translation.

Change-Id: I56cd604d88693dadc85ec06f4b4220a63151f2ee
Closes-Bug: #1553619


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553619

Title:
  string in nova.service.js cannot control the word order in
  translations

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  openstack_dashboard/static/app/core/openstack-service-
  api/nova.service.js

return suppressError ? promise : promise.error(function() {
  toastService.add('error', gettext('Unable to delete the flavor with 
id: ') + flavorId);
});

  String concatenation should not be used.
  https://docs.djangoproject.com/en/1.8/topics/i18n/translation/#interpolate
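  The same principle, shown in Python rather than the JavaScript of the
  report (the gettext below is a no-op stand-in for a real translation
  function): concatenation freezes the word order, while a named
  placeholder lets translators move the id wherever their language
  needs it.

```python
def gettext(message):
    # No-op stand-in; a real catalog would return the translation.
    return message


_ = gettext
flavor_id = 'abc123'

# Bad: the id can only ever appear at the end of the sentence,
# whatever word order the target language wants.
bad = _('Unable to delete the flavor with id: ') + flavor_id

# Good: translators can place '%(id)s' anywhere in their translation.
good = _('Unable to delete the flavor with id: %(id)s') % {'id': flavor_id}
```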

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553581] Re: PO file: translator notice is ignored

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289039
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=ce6e34a5e4c8841ff0d8da738339cb374beb0949
Submitter: Jenkins
Branch:master

commit ce6e34a5e4c8841ff0d8da738339cb374beb0949
Author: Akihiro Motoki 
Date:   Mon Mar 7 00:58:00 2016 +0900

Honor comments for translators when extracting messages

To extract commments for translators, '--add-comments' option
must be passed to pybabel. Comments for translators were extracted
when we used Django tools to extract messages, but when we switched
to pybabel it was lost. This commit adds the option to run_tests.sh.

Move the place of existing comments for translators
so that pybabel can find them. Django tool and pybabel
expect a bit different places.
Also add comments to ungettext_lazy in test codes.

Closes-Bug: #1553581
Change-Id: Ia2df36dfebb59bede19d57b2158a907126ca1944


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553581

Title:
  PO file: translator notice is ignored

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  After Horizon team decided to use pybabel to extract message catalogs,
  "Comments for Translators" [1] are ignored.
  PyBabel does not support "Comments for Translators".
  Is there any solution?

  I see several strings in our test codes are marked as translatable and they 
have no translation comments.
  I tried to add comments for translators, but I failed to do it.

  The pybabel migration almost succeeds, but this is one of its drawbacks:
  we are now not compatible with the standard Django way.

  To raise attention, I intentionally target this at Mitaka-RC1.
  Feel free to move it out.

  [1] https://docs.djangoproject.com/es/1.9/topics/i18n/translation
  /#comments-for-translators

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554227] Re: DHCP unicast requests are not responded to

2016-03-08 Thread Billy Olsen
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554227

Title:
  DHCP unicast requests are not responded to

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  New

Bug description:
  Issue:
  We run nova-network in VLAN+multi_host mode on Kilo and notice that only one 
dnsmasq process (either the oldest or newest) on the hypervisor responds to 
unicast BOOTPREQUESTS. dhclient on VMs will retry until it eventually gives up 
and broadcasts the request, which is then responded to. Depending on the timing 
of the DHCP broadcast request, VMs can briefly lose connectivity as they 
attempt rebinding.

  According to
  
http://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commitdiff;h=9380ba70d67db6b69f817d8e318de5ba1e990b12,
  it seems that passing "--interface" argument, in addition to "--bind-
  interfaces" is necessary for dnsmasq to work correctly in VLAN mode.
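  A sketch of that flag combination in Python (nova-network assembles
  the dnsmasq command line in Python); the bridge name and DHCP range
  are illustrative, and this is not the exact nova invocation:

```python
def build_dnsmasq_cmd(bridge, dhcp_start):
    """Assemble a dnsmasq argv; --interface in addition to
    --bind-interfaces is the combination the linked dnsmasq commit
    says is needed for correct behavior in VLAN mode."""
    return [
        'dnsmasq',
        '--no-hosts',
        '--bind-interfaces',
        # Without this, per the report, only one dnsmasq process on the
        # hypervisor answers unicast BOOTPREQUESTs.
        '--interface=%s' % bridge,
        '--dhcp-range=%s,static' % dhcp_start,
    ]


cmd = build_dnsmasq_cmd('br100', '10.0.0.2')
```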

  
  Reproduce steps:
  1. Create two tenants
  2. Create a VM under each tenant, forcing the VMs to run on a single 
hypervisor. I tested with a vanilla Ubuntu cloud image, but any other image 
that uses dhclient should also work.
  3. On the hypervisor, run dhcpdump -i  for each tenant's 
bridge interface. On at least one of them, you should see unicast BOOTPREQUEST 
with no corresponding BOOTPREPLY. dnsmasq will reply when the request 
eventually hits 255.255.255.255.

  
  Nova/Openstack/dnsmasq versions:
  ii  nova-api 1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - API frontend
  ii  nova-common  1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - common files
  ii  nova-compute 1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - compute node base
  ii  nova-compute-libvirt 1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - compute node libvirt support
  ii  nova-compute-qemu1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - compute node (QEmu)
  ii  nova-network 1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - Network manager
  ii  nova-novncproxy  1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - NoVNC proxy
  ii  python-nova  1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute Python libraries
  ii  python-nova-adminclient  0.1.8-0ubuntu2   
 amd64client for administering Openstack Nova
  ii  python-novaclient1:2.22.0-0ubuntu2~cloud0 
 all  client library for OpenStack Compute API
  ii  dnsmasq-base 2.68-1ubuntu0.1  
 amd64Small caching DNS proxy and DHCP/TFTP server
  ii  dnsmasq-utils2.68-1ubuntu0.1  
 amd64Utilities for manipulating DHCP leases

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554227/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375408] Re: nova instance delete issue

2016-03-08 Thread Markus Zoeller (markus_z)
** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
 Assignee: Padmakanth (padmakanth-chandrapati) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375408

Title:
  nova instance delete issue

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  I am seeing a behavior where deleting a nova instance returns an Error status 
and the instance fails to delete.
  The scenario where I see this behavior is as follows:
  Create a nova VM instance.
  Create a cinder volume.
  Attach this volume to the nova VM instance and wait for the volume to go to 
the ‘In-use’ state.
  Mount the created partition in the VM.
  Unmount the created partition.
  Detach the volume from the VM instance and wait for the volume to go to the 
‘Available’ state.
  Delete the volume.
  Delete the nova VM instance ==> It fails at this step, because the status of 
the server instance is set to Error.

  Based on the logs I also see a RabbitMQ error, which might explain the problem.
  3908 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
  3909 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 648, in ensure
  3910 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     return method(*args, **kwargs)
  3911 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 753, in _publish
  3912 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     publisher = cls(self.conf, self.channel, topic, **kwargs)
  3913 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 420, in __init__
  3914 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     super(NotifyPublisher, self).__init__(conf, channel, topic, **kwargs)
  3915 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 396, in __init__
  3916 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     **options)
  3917 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 339, in __init__
  3918 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.reconnect(channel)
  3919 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 423, in reconnect
  3920 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     super(NotifyPublisher, self).reconnect(channel)
  3921 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 347, in reconnect
  3922 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     routing_key=self.routing_key)
  3923 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 84, in __init__
  3924 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.revive(self._channel)
  3925 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 218, in revive
  3926 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.declare()
  3927 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 104, in declare
  3928 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.exchange.declare()
  3929 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
  3930 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     nowait=nowait, passive=passive,
  3931 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/channel.py", line 613, in exchange_declare
  3932 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self._send_method((40, 10), args)
  3933 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python

[Yahoo-eng-team] [Bug 1349617] Re: SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer

2016-03-08 Thread Augustina Ragwitz
Nova: Fix was released for related bug
https://bugs.launchpad.net/nova/+bug/1532809

https://review.openstack.org/#/c/273042/

If this issue rears up again, please open a new bug for Nova.

** Changed in: nova
   Status: Incomplete => Fix Released

** Changed in: nova
 Assignee: (unassigned) => Augustina Ragwitz (auggy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1349617

Title:
  SSHException: Error reading SSH protocol banner[Errno 104] Connection
  reset by peer

Status in CirrOS:
  Incomplete
Status in grenade:
  Invalid
Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Noticed a drop in categorized bugs on grenade jobs, so looking at
  latest I see this:

  http://logs.openstack.org/63/108363/5/gate/gate-grenade-dsvm-partial-
  ncpu/1458072/console.html

  Running this query:

  message:"Failed to establish authenticated ssh connection to cirros@"
  AND message:"(Error reading SSH protocol banner[Errno 104] Connection
  reset by peer). Number attempts: 18. Retry after 19 seconds." AND
  tags:"console"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIGVzdGFibGlzaCBhdXRoZW50aWNhdGVkIHNzaCBjb25uZWN0aW9uIHRvIGNpcnJvc0BcIiBBTkQgbWVzc2FnZTpcIihFcnJvciByZWFkaW5nIFNTSCBwcm90b2NvbCBiYW5uZXJbRXJybm8gMTA0XSBDb25uZWN0aW9uIHJlc2V0IGJ5IHBlZXIpLiBOdW1iZXIgYXR0ZW1wdHM6IDE4LiBSZXRyeSBhZnRlciAxOSBzZWNvbmRzLlwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA2NTkwMTEwMzMyLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  I get 28 hits in 7 days, and it seems to be very particular to grenade
  jobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1349617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459724] Re: live-migration causes lack of access to instance

2016-03-08 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and it looks like there are no open reviews for it. To keep
the bug list sane, I am closing this bug as "Won't Fix". This does not
mean that it is not a valid bug report; it's more to acknowledge that
no progress can be expected here anymore. You are still free to push a
new patch for this bug. If you can reproduce it on the current master
code or on a maintained stable branch, please switch it to "Confirmed".
If you have the information that was asked for when this was switched to
"Incomplete", add a comment and switch the report back to "New".

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459724

Title:
  live-migration causes lack of access to instance

Status in OpenStack Compute (nova):
  Won't Fix
Status in neutron package in Ubuntu:
  Incomplete

Bug description:
  Scenario: Host A and Host B are dedicated compute nodes, separate from the 
controller which contains the neutron networking component.
  1. Spawn an instance on Host A.
  2. Associate a floating ip to it and verify that it can be reached via 
floating ip and namespace.
  3. live-migrate the instance to the Host B.
  4. After migration completes, verify that the instance can be reached 
(ping/ssh floating ip and via namespace).

  Expected Behavior:
  Networking of instance should be working fine (ping/ssh should work).

  Actual Behavior:
  The instance cannot be reached; however, the instance can itself reach out to 
everywhere it cannot be reached from (the console is the only access to the 
instance, so somehow it is still reachable).

  Furthermore, if the instance gets live-migrated back to the original
  host, it can again be reached normally.

  Branch: Juno 2014.2.1-1, python-novaclient 2.20.0-1

  Which logs are needed for this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459724/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370496] Re: Failed to establish authenticated ssh connection to cirros - error: [Errno 113] No route to host

2016-03-08 Thread Augustina Ragwitz
This may have been fixed by: https://review.openstack.org/#/c/273042/


I checked the above logstash query and got zero results for the past 30 days.

Closing this as Fix Released. If this issue pops back up, please open a
new bug.

** Changed in: nova
   Status: Incomplete => Fix Released

** Changed in: nova
 Assignee: (unassigned) => Augustina Ragwitz (auggy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370496

Title:
  Failed to establish authenticated ssh connection to cirros - error:
  [Errno 113] No route to host

Status in neutron:
  Expired
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Saw this in the gate today, I think it's separate from bug 1349617, at
  least the error is different and the hits in logstash are different.
  The root cause might be the same.

  This is in both nova-network and neutron jobs.

  http://logs.openstack.org/01/116001/3/gate/gate-grenade-dsvm-partial-
  ncpu/3dce5d7/logs/tempest.txt.gz#_2014-09-16_22_32_14_888

  2014-09-16 22:32:14.888 6636 ERROR tempest.common.ssh [-] Failed to establish authenticated ssh connection to cirros@172.24.4.4 after 15 attempts
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh Traceback (most recent call last):
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File "tempest/common/ssh.py", line 76, in _get_ssh_connection
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh     timeout=self.channel_timeout, pkey=self.pkey)
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, in connect
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh     retry_on_signal(lambda: sock.connect(addr))
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File "/usr/local/lib/python2.7/dist-packages/paramiko/util.py", line 278, in retry_on_signal
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh     return function()
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, in 
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh     retry_on_signal(lambda: sock.connect(addr))
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File "/usr/lib/python2.7/socket.py", line 224, in meth
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh     return getattr(self._sock,name)(*args)
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh error: [Errno 113] No route to host
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 

  message:"_get_ssh_connection" AND message:"error: [Errno 113] No route
  to host" AND tags:"tempest.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiX2dldF9zc2hfY29ubmVjdGlvblwiIEFORCBtZXNzYWdlOlwiZXJyb3I6IFtFcnJubyAxMTNdIE5vIHJvdXRlIHRvIGhvc3RcIiBBTkQgdGFnczpcInRlbXBlc3QudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOS0xMFQxMjo0ODowMyswMDowMCIsInRvIjoiMjAxNC0wOS0xN1QxMjo0ODowMyswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDEwOTU4MTU3MTY3LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  63 hits in 10 days, check and gate, all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482052] Re: The value(boot_index) for the boot volume is wrong

2016-03-08 Thread Markus Zoeller (markus_z)
It looks like this got fixed in the meantime. The description at [1] reads:

"Give each device a unique boot index starting from 0. To disable 
a device from booting, set the boot index to a negative value or 
use the default boot index value, which is None."

Which makes the bug report invalid. If you think this is wrong,
please reopen the report by setting it to "new" and provide details
about the issue.

References:
[1] http://developer.openstack.org/api-ref-compute-v2.1.html#createServer

** Tags added: doc

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: nova
 Assignee: majianjun (mjjun) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482052

Title:
  The value(boot_index) for the boot volume is wrong

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi, I found a problem: the boot_index value documented for the boot volume is 
not right.
  At section 3.4.2.2 of openstack-api-ref.pdf, the description of boot_index is 
as follows:

  boot_index  Indicates a number designating the boot order of the device. Use 
-1
for the boot volume, choose 0 for an attached volume.

  I think that the value for the boot volume should be 0.

  Can you help me confirm it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546778] Re: libvirt: resize with deleted backing image fails

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288640
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=db7fd539f261ea53f6c005478049424b9dae1543
Submitter: Jenkins
Branch: master

commit db7fd539f261ea53f6c005478049424b9dae1543
Author: Matthew Booth 
Date:   Fri Mar 4 18:34:21 2016 +

libvirt: Fix resize of instance with deleted glance image

finish_migration() in the libvirt driver was attempting to resize an
image before checking that its backing file was present. This patch
re-orders these 2 operations. In doing so, we also have to resolve an
overloading of the 'disk_info' variable.

Closes-Bug: #1546778

Change-Id: I03e08fae97416ebe5cdedcf238a06d1b90203c5d


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546778

Title:
  libvirt: resize with deleted backing image fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  Once the glance image from which an instance was spawned is deleted,
  resizes of that image fail if they would take place across more than
  one compute node. Migration and live block migration both succeed.

  Resize fails, I believe, because 'qemu-img resize' is called
  
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7218-L7221)
  before the backing image has been transferred from the source compute
  node
  
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7230-L7233).

  Replication requires two compute nodes. To replicate:

  1. Boot an instance from an image or snapshot.
  2. Delete the image from Glance.
  3. Resize the instance. It will fail with an error similar to:

  Stderr: u"qemu-img: Could not open '/var/lib/nova/instances/f77f1c5c-
  71f7-4645-afa1-dd30bacef874/disk': Could not open backing file: Could
  not open
  '/var/lib/nova/instances/_base/ca94b18d94077894f4ccbaafb1881a90225f1224':
  No such file or directory\n"
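
  In a minimal sketch (the helper names are hypothetical; the real driver
  shells out to `qemu-img resize` and re-fetches the base image from the
  source node), the fix amounts to establishing the backing file before
  resizing the overlay:

```python
calls = []

def ensure_backing_file(base_path):
    # In real code this would re-fetch the backing image from the source
    # compute node (or glance) if it is missing locally.
    calls.append('ensure_backing')

def resize_overlay(disk_path, new_size_gb):
    # In real code this runs: qemu-img resize <disk_path> <size>
    calls.append('resize')

def finish_migration(disk_path, base_path, new_size_gb):
    """Fixed ordering: make the backing file present *before* resizing.

    Resizing a qcow2 overlay whose backing file is absent fails with
    "Could not open backing file", which is the error in this report.
    """
    ensure_backing_file(base_path)
    resize_overlay(disk_path, new_size_gb)

finish_migration('/var/lib/nova/instances/<uuid>/disk',
                 '/var/lib/nova/instances/_base/<checksum>', 20)
print(calls)  # ['ensure_backing', 'resize']
```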

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1546778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538227] Re: Failed `nova-manage db sync` returns exitcode of 0

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289308
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=173068d52449eb04e905b29da6e417ae69cd8055
Submitter: Jenkins
Branch: master

commit 173068d52449eb04e905b29da6e417ae69cd8055
Author: Stephen Finucane 
Date:   Mon Mar 7 12:34:47 2016 +

nova-manage: Print, not raise, exceptions

Raising an exception means it is not possible to return a status code.
Seeing as the 'nova-manage' application is an interactive one, it's
viable to fix this by printing the exception rather than raising.

Change-Id: Ifa3f80e4f7dccada439ffc9363e9f1504c8c2da1
Closes-bug: #1538227


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538227

Title:
  Failed `nova-manage db sync` returns exitcode of 0

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We're trying to upgrade to liberty using SaltStack 
  (just for context, same issue running the command 
  from a shell).

  At one point `nova-manage db sync` is executed.
  Because of #1511466 we get a *critical* error but
  the command still returns an exitcode of 0.
  In a somewhat POSIX environment this means
  "everything is fine" so the deployment just
  continues with an outdated database schema.
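
  A hedged sketch of what the fix amounts to (the names here are
  illustrative, not nova's actual entry point): catch the exception in
  main(), print it, and return a nonzero status so callers such as
  SaltStack see the failure:

```python
import sys

def db_sync():
    # Illustrative stand-in for the real migration call; it raises the
    # same kind of error shown in the log above.
    raise RuntimeError("There are still 3 unmigrated flavor records.")

def main():
    try:
        db_sync()
    except Exception as exc:
        # Print the error instead of re-raising, so the process keeps
        # control of its exit status.
        print(exc, file=sys.stderr)
        return 1  # nonzero exit status: POSIX convention for failure
    return 0

rc = main()  # `sys.exit(rc)` at the real entry point would propagate
             # the failure to the shell and to deployment tooling
```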

ID: nova-manage db sync
  Function: cmd.run
  Name: nova-manage db sync; sleep 15
Result: True
   Comment: Command "nova-manage db sync; sleep 15" run
   Started: 18:07:09.511897
  Duration: 17747.739 ms
   Changes:   
--
pid:
18000
retcode:
0
stderr:
No handlers could be found for logger "oslo_config.cfg"
2016-01-26 18:07:12.092 18001 DEBUG migrate.versioning.repository [-] Loading repository /usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migrate_repo... __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/repository.py:76
2016-01-26 18:07:12.093 18001 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py... __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
  [...]
  [...]
  [...]
2016-01-26 18:07:12.157 18001 INFO migrate.versioning.api [-] 290 -> 291...
2016-01-26 18:07:12.167 18001 CRITICAL nova [-] ValidationError: There are still 3 unmigrated flavor records. Migration cannot continue until all instance flavor records have been migrated to the new format. Please run `nova-manage db migrate_flavor_data' first.
2016-01-26 18:07:12.167 18001 ERROR nova Traceback (most recent call last):
2016-01-26 18:07:12.167 18001 ERROR nova   File "/usr/bin/nova-manage", line 10, in 
2016-01-26 18:07:12.167 18001 ERROR nova     sys.exit(main())
2016-01-26 18:07:12.167 18001 ERROR nova   File "/usr/lib/python2.7/dist-packages/nova/cmd/manage.py", line 1443, in main
2016-01-26 18:07:12.167 18001 ERROR nova     ret = fn(*fn_args, **fn_kwargs)
2016-01-26 18:07:12.167 18001 ERROR nova   File "/usr/lib/python2.7/dist-packages/nova/cmd/manage.py", line 910, in sync
2016-01-26 18:07:12.167 18001 ERROR nova     return migration.db_sync(version)
2016-01-26 18:07:12.167 18001 ERROR nova   File "/usr/lib/python2.7/dist-packages/nova/db/migration.py", line 26, in db_sync
2016-01-26 18:07:12.167 18001 ERROR nova     return IMPL.db_sync(version=version, database=database)
2016-01-26 18:07:12.167 18001 ERROR nova   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py", line 106, in db_sync
2016-01-26 18:07:12.167 18001 ERROR nova     version)
2016-01-26 18:07:12.167 18001 ERROR nova   File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, in upgrade
2016-01-26 18:07:12.167 18001 ERROR nova     return _migrate(url, repository, version, upgrade=True, err=err, **opts)
2016-01-26 18:07:12.167 18001 ERROR nova   File "", line 2, in _migrate
2016-01-26 18:07:12.167 18001 ERROR nova   File "/usr/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", line 160, in with_engine
2016-01-26 18:07:12.167 18001 ERROR nova     return f(*a, **kw)
2016-01-26 18:07:12.167 18001 ERROR nova     return migration.db_sync(version)
2016-01-26 18:07:12.167 18001 ERROR nova   File "/usr/lib/python2.7/dis

[Yahoo-eng-team] [Bug 1554617] Re: Tests for aggregates use getattr() improperly

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/268712
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5c51e3c21d2d7820a0a96403d5fdd8dd16d8ed30
Submitter: Jenkins
Branch: master

commit 5c51e3c21d2d7820a0a96403d5fdd8dd16d8ed30
Author: Ryan Rossiter 
Date:   Sun Jan 17 00:50:41 2016 +

Aggregate object fixups

In the original removal of DictCompat from the Aggregate object, the
aggregate.get(foo, default) calls were changed to getattr(aggregate,
foo, default). This is an incorrect change, because the default of
getattr() is only used if AttributeError is raised. The original get()
call would return the default if the variable was not set, but in order
to get the default with getattr() 'in' has to be used. So now, the
getattr() calls are changed to:

foo = aggregate.foo if 'foo' in aggregate else default

Also, a comment was added for using getattr() on all aggregate fields,
explaining we can do the getattr() and not worry about metadata being
unset, because the compute API always sets it. nova.objects.base also
has a helper method to compare the primitives of two objects to check if
they're equal, so a helper method in the tests were changed over to use
that helper method.

Change-Id: Iee651704c90fcdda0938f907924a4565399601d7
Closes-Bug: #1554617


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554617

Title:
  Tests for aggregates use getattr() improperly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Because objects do not raise AttributeError if an attribute is not
  set, the default functionality of getattr() cannot be used with
  objects. Instead, an 'in' check needs to be used on the object, and if
  that passes, I can use getattr(). If that fails, I need to give the
  default.
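
  A minimal self-contained sketch of the pitfall (FakeAggregate here is a
  hypothetical stand-in for the o.vo-backed object, not actual nova code):

```python
class FakeAggregate:
    """Object whose declared-but-unset fields resolve to None instead of
    raising AttributeError, so getattr()'s default is never consulted."""
    fields = ('name', 'metadata')

    def __init__(self, **values):
        self._values = values

    def __getattr__(self, name):
        if name in self.fields:
            # Unset field: returns None rather than raising AttributeError.
            return self._values.get(name)
        raise AttributeError(name)

    def __contains__(self, name):
        return name in self._values

agg = FakeAggregate(name='agg1')

# Broken pattern: the attribute lookup "succeeds" with None, so the
# default {} is silently ignored.
broken = getattr(agg, 'metadata', {})

# Fixed pattern from the commit: use 'in' to decide, then access the field.
fixed = agg.metadata if 'metadata' in agg else {}

print(broken, fixed)  # None {}
```

  getattr()'s default only applies when AttributeError is actually raised,
  so with objects like this only the 'in' check yields the intended default.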

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554695] [NEW] network not found warnings in test runs

2016-03-08 Thread Kevin Benton
Public bug reported:

A neutron server log from a normal test run is filled with entries like
the following:

2016-03-08 10:08:32.269 18894 WARNING neutron.api.rpc.handlers.dhcp_rpc
[req-a55cec8d-37ee-46b7-97f3-aadf91bcd512 - -] Network
1fd1dfd5-8d24-4016-b8e4-032ec8ef3ce1 could not be found, it might have
been deleted concurrently.


They are completely normal during network creation/deletion so it's not a 
warning condition.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554695

Title:
  network not found warnings in test runs

Status in neutron:
  In Progress

Bug description:
  A neutron server log from a normal test run is filled with entries
  like the following:

  2016-03-08 10:08:32.269 18894 WARNING
  neutron.api.rpc.handlers.dhcp_rpc [req-a55cec8d-37ee-
  46b7-97f3-aadf91bcd512 - -] Network
  1fd1dfd5-8d24-4016-b8e4-032ec8ef3ce1 could not be found, it might have
  been deleted concurrently.

  
  They are completely normal during network creation/deletion so it's not a 
warning condition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554696] [NEW] Neutron server log filled with "device requested by agent not found"

2016-03-08 Thread Kevin Benton
Public bug reported:

The neutron server logs from a normal test run are filled with the
following entries:

2016-03-08 10:07:29.265 18894 WARNING neutron.plugins.ml2.rpc [req-
c5cf3153-b01f-4be7-88f6-730e28fa4d09 - -] Device 91993890-6352-4488
-9e1f-1a419fa17bc1 requested by agent ovs-agent-devstack-trusty-ovh-
bhs1-8619597 not found in database


This occurs whenever an agent requests details about a recently deleted port. 
It's not a valid warning condition.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554696

Title:
  Neutron server log filled with "device requested by agent not found"

Status in neutron:
  In Progress

Bug description:
  The neutron server logs from a normal test run are filled with the
  following entries:

  2016-03-08 10:07:29.265 18894 WARNING neutron.plugins.ml2.rpc [req-
  c5cf3153-b01f-4be7-88f6-730e28fa4d09 - -] Device 91993890-6352-4488
  -9e1f-1a419fa17bc1 requested by agent ovs-agent-devstack-trusty-ovh-
  bhs1-8619597 not found in database

  
  This occurs whenever an agent requests details about a recently deleted port. 
It's not a valid warning condition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464750] Re: Service accounts can be used to login horizon

2016-03-08 Thread Markus Zoeller (markus_z)
This bug report is specifically about logging into Horizon with a
nova service user. That the nova user has admin rights is
covered in bug 1445199. That the admin role is not properly scoped is
handled in bug 968696. Nova cannot prevent or influence logins to
Horizon. => Invalid for Nova

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464750

Title:
  Service accounts can be used to login horizon

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  This is not a bug and may / may not be a security issue ... but it
  appears that the service accounts created in keystone are of the same
  privilege level as any other admin accounts created through keystone,
  and I don't like that.

  Would it be possible to implement something that would distinguish
  user accounts from service accounts?  Is there a way to isolate some
  service accounts from the remaining of the openstack APIs?

  One quick example of this is that any service account has admin
  privileges on all the other services. At this point, I'm trying to
  figure out why we are creating a distinct service account for each
  service if nothing isolates them.

  IE:

  glance account can spawn a VM
  cinder account can delete an image
  heat account can delete a volume
  nova account can create an image

  
  All of these service accounts have access to the Horizon dashboard. One 
small hack could be to prevent those accounts from logging in to Horizon.

  Thanks,

  Dave

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554728] [NEW] Unable to launch an instance on a network where port-security-enabled=False

2016-03-08 Thread Chirag Shahani
Public bug reported:

Create a network with port-security-enabled=False.
stack@whiskey:~$ neutron net-show n
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| availability_zone_hints   |  |
| availability_zones| nova |
| id| 45a84b0e-6bae-4a05-a0d2-5ec3d43ff5b4 |
| mtu   | 1450 |
| name  | n|
| port_security_enabled | False|
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 1019 |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   | 57fb945b-92d2-4cf3-b7a0-dd43e96b88d5 |
| tenant_id | 96df521a0afe46128044cf6ee20e4843 |
+---+--+

create a subnet under this network

stack@whiskey:~$ neutron subnet-show s
+---+--+
| Field | Value|
+---+--+
| allocation_pools  | {"start": "2.2.2.2", "end": "2.2.2.254"} |
| cidr  | 2.2.2.0/24   |
| dns_nameservers   |  |
| enable_dhcp   | True |
| gateway_ip| 2.2.2.1  |
| host_routes   |  |
| id| 57fb945b-92d2-4cf3-b7a0-dd43e96b88d5 |
| ip_version| 4|
| ipv6_address_mode |  |
| ipv6_ra_mode  |  |
| name  | s|
| network_id| 45a84b0e-6bae-4a05-a0d2-5ec3d43ff5b4 |
| subnetpool_id |  |
| tenant_id | 96df521a0afe46128044cf6ee20e4843 |
+---+--+


Now, create a port under this subnet:

stack@whiskey:~$ neutron port-show p
+---++
| Field | Value 
 |
+---++
| admin_state_up| True  
 |
| allowed_address_pairs |   
 |
| binding:host_id   |   
 |
| binding:profile   | {}
 |
| binding:vif_details   | {}
 |
| binding:vif_type  | unbound   
 |
| binding:vnic_type | normal
 |
| device_id |   
 |
| device_owner  |   
 |
| dns_name  |   
 |
| extra_dhcp_opts   |   
 |
| fixed_ips | {"subnet_id": "57fb945b-92d2-4cf3-b7a0-dd43e96b88d5", 
"ip_address": "2.2.2.3"} |
| id| 33095bd6-3a5c-4ccd-9e4f-046fb7f9272e  
 |
| mac_address   | fa:16:3e:f0:46:ae 
 |
| name  | p 
 |
| network_id| 45a84b0e-6bae-4a05-a0d2-5ec3d43ff5b4  
 |
| port_security_enabled | False 
 |
| security_groups   |   

[Yahoo-eng-team] [Bug 1552830] Re: no module docs generated

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288082
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=b2b0599d515f910c5d1276ab40caaebf96056251
Submitter: Jenkins
Branch:master

commit b2b0599d515f910c5d1276ab40caaebf96056251
Author: Tom Cocozzello 
Date:   Thu Mar 3 13:35:02 2016 -0600

no module docs generated

It looks like in the docs config.py there
needs to be some modifications so the module
docs can be generated through sphinx

Change-Id: I41a2d62a2300100d9fb412698360bb1238cf7406
Closes-Bug: #1552830


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1552830

Title:
  no module docs generated

Status in Glance:
  Fix Released

Bug description:
  It looks like there are no module docs that are being generated by
  sphinx.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1552830/+subscriptions



[Yahoo-eng-team] [Bug 1546708] Re: ng flavor table missing column

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281494
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=767e83dc038fd390cd9f4b624f5544bf695ff6d1
Submitter: Jenkins
Branch:master

commit 767e83dc038fd390cd9f4b624f5544bf695ff6d1
Author: Cindy Lu 
Date:   Wed Feb 17 11:51:33 2016 -0800

Add missing column rxtx_factor to ng flavors table

For consistency with og flavors table.

Change-Id: I8e5b1fd123f0ebdea633f758641869d7e558cfd0
Closes-Bug: #1546708


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546708

Title:
  ng flavor table missing column

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Added a missing column to the Flavors table, the rx-tx factor.  We
  should add it to ng flavors table for consistency.

  https://review.openstack.org/#/c/247673/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546708/+subscriptions



[Yahoo-eng-team] [Bug 1554762] [NEW] Neutron fwaas install doc needs to replace the wiki page

2016-03-08 Thread Sean M. Collins
Public bug reported:

It's outdated and refers to DevStack.


https://wiki.openstack.org/wiki/Neutron/FWaaS/HowToInstall

** Affects: neutron
 Importance: Low
 Assignee: Sean M. Collins (scollins)
 Status: Confirmed


** Tags: fwaas

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
Milestone: None => next

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554762

Title:
  Neutron fwaas install doc needs to replace the wiki page

Status in neutron:
  Confirmed

Bug description:
  It's outdated and refers to DevStack.

  
  https://wiki.openstack.org/wiki/Neutron/FWaaS/HowToInstall

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554762/+subscriptions



[Yahoo-eng-team] [Bug 1554206] Re: notifications missing virtual_size

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289567
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=3c6d288fab7f0984aa84458f0fdcfde5a1d0c91f
Submitter: Jenkins
Branch:master

commit 3c6d288fab7f0984aa84458f0fdcfde5a1d0c91f
Author: bria4010 
Date:   Mon Mar 7 16:10:29 2016 -0500

Adds virtual_size to notifications

The virtual_size property was added to the v2 API in Icehouse as a
"core" image property [0], but the field was not added to image
notifications.  This patch addresses that oversight.

[0] https://blueprints.launchpad.net/glance/+spec/split-image-size

Change-Id: I423719a88ba1ef17e7475ab5388fb1720a28011e
Closes-bug: #1554206


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1554206

Title:
  notifications missing virtual_size

Status in Glance:
  Fix Released

Bug description:
  The virtual_size property was added to the v2 API in Havana as a
  "core" image property [0], but the field was not added to image
  notifications.

  [0] Change-Id: Ie4f58ee2e4da3a6c1229840295c7f62023a95b70

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1554206/+subscriptions



[Yahoo-eng-team] [Bug 1503595] Re: Firewall remain in active state even after deleting router associated with firewall

2016-03-08 Thread Manjeet Singh Bhatia
I don't see this issue with a single-node devstack; it is updating the
status properly.

I think it is resolved. I wanted to mark this invalid, but it seems only
a supervisor can do that.

Either provide more info (which branch, how many nodes you are using),
or this is fixed or invalid.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503595

Title:
  Firewall remain in active state even after deleting router associated
  with firewall

Status in neutron:
  Incomplete

Bug description:
  Steps to Reproduce:
  ==
  1. Create a network, subnet, router and add a router interface
  2. Create a firewall rule
  3. Create a firewall policy with the above firewall rule
  4. Create a firewall with the above policy
  and attach the above router to the firewall.

  5. Then delete the  router attached to the firewall and check the
  status of the firewall

  Issue :
Firewall remains in ACTIVE state even though the router_ids field is blank
when getting the details of the firewall
  {code}
  stack@stevens-creek:~/firewall$ neutron firewall-list
  
+--+-+--+
  | id   | name| firewall_policy_id 
  |
  
+--+-+--+
  | 71746ed4-4e12-48c6-8db5-31543276058e | user-fw | 
320f68ea-4947-484d-af32-5ead4f368348 |
  
+--+-+--+
  stack@stevens-creek:~/firewall$ neutron firewall-show user-fw
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | description|  |
  | firewall_policy_id | 320f68ea-4947-484d-af32-5ead4f368348 |
  | id | 71746ed4-4e12-48c6-8db5-31543276058e |
  | name   | user-fw  |
  | router_ids |  |
  | status | ACTIVE   |
  | tenant_id  | 84dc1f66b8b34fb2a48e2dce7031f279 |
  ++--+
  stack@stevens-creek:~/firewall$
  {code}

  Expected :

Firewall state should change to either a pending or an error state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503595/+subscriptions



[Yahoo-eng-team] [Bug 1503595] Re: Firewall remain in active state even after deleting router associated with firewall

2016-03-08 Thread Sean M. Collins
I'll set this to incomplete - to see if the reporter can come back and
verify. There may have been a related fix that resolved this, but was
not linked to this bug.

** Changed in: neutron
   Status: Fix Released => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503595

Title:
  Firewall remain in active state even after deleting router associated
  with firewall

Status in neutron:
  Incomplete

Bug description:
  Steps to Reproduce:
  ==
  1. Create a network, subnet, router and add a router interface
  2. Create a firewall rule
  3. Create a firewall policy with the above firewall rule
  4. Create a firewall with the above policy
  and attach the above router to the firewall.

  5. Then delete the  router attached to the firewall and check the
  status of the firewall

  Issue :
Firewall remains in ACTIVE state even though the router_ids field is blank
when getting the details of the firewall
  {code}
  stack@stevens-creek:~/firewall$ neutron firewall-list
  
+--+-+--+
  | id   | name| firewall_policy_id 
  |
  
+--+-+--+
  | 71746ed4-4e12-48c6-8db5-31543276058e | user-fw | 
320f68ea-4947-484d-af32-5ead4f368348 |
  
+--+-+--+
  stack@stevens-creek:~/firewall$ neutron firewall-show user-fw
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | description|  |
  | firewall_policy_id | 320f68ea-4947-484d-af32-5ead4f368348 |
  | id | 71746ed4-4e12-48c6-8db5-31543276058e |
  | name   | user-fw  |
  | router_ids |  |
  | status | ACTIVE   |
  | tenant_id  | 84dc1f66b8b34fb2a48e2dce7031f279 |
  ++--+
  stack@stevens-creek:~/firewall$
  {code}

  Expected :

Firewall state should change to either a pending or an error state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503595/+subscriptions



[Yahoo-eng-team] [Bug 1554791] [NEW] Horizon should use upper-constraints in testing

2016-03-08 Thread Richard Jones
Public bug reported:

Recently OpenStack introduced a mechanism to specify a constrained
"working set" of packages that are "guaranteed" to produce a working
OpenStack environment. This pinning of packages limits the more broadly-
defined requirements.txt which is managed by global-requirements. Even
though it pins package versions, it is called "upper-constraints".

We should include those constraints in our test runs.

A mechanism exists to allow constraints to be overridden for specific
patches by using Depends-On to a constraints update, allowing testing of
new constraints.

Given enough test coverage in the constraints updates (to be addressed
in a separate patch to the gate jobs for that project) this would have
allowed Horizon to detect that the recent heatclient update broke
Horizon, but also would have allowed the Horizon gate to continue
unaffected by the new heatclient update (because we would have been
pinned to a previous, working version).

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1554791

Title:
  Horizon should use upper-constraints in testing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Recently OpenStack introduced a mechanism to specify a constrained
  "working set" of packages that are "guaranteed" to produce a working
  OpenStack environment. This pinning of packages limits the more
  broadly-defined requirements.txt which is managed by global-
  requirements. Even though it pins package versions, it is called
  "upper-constraints".

  We should include those constraints in our test runs.

  A mechanism exists to allow constraints to be overridden for specific
  patches by using Depends-On to a constraints update, allowing testing
  of new constraints.

  Given enough test coverage in the constraints updates (to be addressed
  in a separate patch to the gate jobs for that project) this would have
  allowed Horizon to detect that the recent heatclient update broke
  Horizon, but also would have allowed the Horizon gate to continue
  unaffected by the new heatclient update (because we would have been
  pinned to a previous, working version).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1554791/+subscriptions



[Yahoo-eng-team] [Bug 1551836] Re: CORS middleware's latent configuration options need to change

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/287354
Committed: 
https://git.openstack.org/cgit/openstack/searchlight/commit/?id=8c4ec967bc0e2aa0d7b3d658569aff1f36e6f442
Submitter: Jenkins
Branch:master

commit 8c4ec967bc0e2aa0d7b3d658569aff1f36e6f442
Author: Michael Krotscheck 
Date:   Wed Mar 2 09:57:08 2016 -0800

Moved CORS middleware configuration into oslo-config-generator

The default values needed for searchlight's implementation of cors
middleware have been moved from paste.ini into the configuration
hooks provided by oslo.config. Furthermore, these values have been
added to the default initialization procedure. This ensures
that if a value remains unset in the configuration file, it will
fallback to using sane defaults. It also ensures that an operator
modifying the configuration will be presented with that same
set of defaults.

Change-Id: Iada9cedcdc5b104bf2fa1c68d0d74794c04d1d28
Closes-Bug: 1551836


** Changed in: searchlight
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551836

Title:
  CORS middleware's latent configuration options need to change

Status in Aodh:
  Fix Released
Status in Barbican:
  In Progress
Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in cloudkitty:
  In Progress
Status in congress:
  In Progress
Status in Cue:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in heat:
  In Progress
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in Mistral:
  In Progress
Status in Murano:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.config:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Solum:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  It was pointed out in http://lists.openstack.org/pipermail/openstack-
  dev/2016-February/086746.html that configuration options included in
  paste.ini are less than optimal, because they impose an upgrade burden
  on both operators and engineers. The following discussion expanded to
  all projects (not just those using paste), and the following
  conclusion was reached:

  A) All generated configuration files should contain any headers which the API 
needs to operate. This is currently supported in oslo.config's generate-config 
script, as of 3.7.0
  B) These same configuration headers should be set as defaults for the given 
API, using cfg.set_defaults. This permits an operator to simply activate a 
domain, and not have to worry about tweaking additional settings.
  C) All hardcoded headers should be detached from the CORS middleware.
  D) Configuration and activation of CORS should be consistent across all 
projects.

  It was also agreed that this is a blocking bug for mitaka. A reference
  patch has already been approved for keystone, available here:
  https://review.openstack.org/#/c/285308/

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1551836/+subscriptions



[Yahoo-eng-team] [Bug 1554809] [NEW] func-test test_service_disabled fails

2016-03-08 Thread Ken'ichi Ohmichi
Public bug reported:

http://logs.openstack.org/43/286743/3/check/gate-nova-tox-functional/03715d9
http://logs.openstack.org/42/286742/3/check/gate-nova-tox-functional/0a34837

Traceback (most recent call last):
  File 
"nova/tests/functional/notification_sample_tests/test_service_update.py", line 
44, in test_service_disabled
replacements={'disabled': True})
  File 
"nova/tests/functional/notification_sample_tests/notification_sample_base.py", 
line 92, in _verify_notification
self.assertJsonEqual(sample_obj, notification)
  File "nova/test.py", line 349, in assertJsonEqual
inner_mismatch, verbose=True)
testtools.matchers._impl.MismatchError: Match failed. Matchee: {'event_type': 
u'service.update', 'publisher_id': u'nova-compute:host1', 'priority': 'INFO', 
'payload': {'nova_object.namespace': 'nova', 'nova_object.version': '1.0', 
'nova_object.name': 'ServiceStatusPayload', 'nova_object.data': {'binary': 
u'nova-compute', 'topic': u'compute', 'last_seen_up': '2012-10-29T13:42:05Z', 
'report_count': 1, 'forced_down': False, 'host': u'host1', 'disabled_reason': 
None, 'version': 10, 'disabled': True}}}
Matcher: {u'payload': {u'nova_object.namespace': u'nova', 
u'nova_object.version': u'1.0', u'nova_object.name': u'ServiceStatusPayload', 
u'nova_object.data': {u'binary': u'nova-compute', u'topic': u'compute', 
u'last_seen_up': u'2012-10-29T13:42:05Z', u'report_count': 1, u'forced_down': 
False, u'host': u'host1', u'disabled_reason': None, u'version': 9, u'disabled': 
True}}, u'event_type': u'service.update', u'priority': u'INFO', 
u'publisher_id': u'nova-compute:host1'}
Difference: 9 != 10
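The mismatch above is a single differing leaf (the object version) buried in a
large nested payload. A small, hypothetical helper (not the matcher that
testtools actually uses) shows how such a difference can be located:

```python
def first_diff(a, b, path=""):
    """Return (path, left, right) for the first differing leaf, else None."""
    if isinstance(a, dict) and isinstance(b, dict):
        for key in sorted(set(a) | set(b)):
            found = first_diff(a.get(key), b.get(key), path + "/" + str(key))
            if found:
                return found
        return None
    return None if a == b else (path, a, b)

# Trimmed-down versions of the sample notification and the one emitted.
sample = {"payload": {"nova_object.data": {"version": 9, "disabled": True}}}
actual = {"payload": {"nova_object.data": {"version": 10, "disabled": True}}}

# Locates the lone mismatch that produced "Difference: 9 != 10".
assert first_diff(sample, actual) == ("/payload/nova_object.data/version", 9, 10)
```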

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554809

Title:
  func-test test_service_disabled fails

Status in OpenStack Compute (nova):
  New

Bug description:
  http://logs.openstack.org/43/286743/3/check/gate-nova-tox-functional/03715d9
  http://logs.openstack.org/42/286742/3/check/gate-nova-tox-functional/0a34837

  Traceback (most recent call last):
File 
"nova/tests/functional/notification_sample_tests/test_service_update.py", line 
44, in test_service_disabled
  replacements={'disabled': True})
File 
"nova/tests/functional/notification_sample_tests/notification_sample_base.py", 
line 92, in _verify_notification
  self.assertJsonEqual(sample_obj, notification)
File "nova/test.py", line 349, in assertJsonEqual
  inner_mismatch, verbose=True)
  testtools.matchers._impl.MismatchError: Match failed. Matchee: {'event_type': 
u'service.update', 'publisher_id': u'nova-compute:host1', 'priority': 'INFO', 
'payload': {'nova_object.namespace': 'nova', 'nova_object.version': '1.0', 
'nova_object.name': 'ServiceStatusPayload', 'nova_object.data': {'binary': 
u'nova-compute', 'topic': u'compute', 'last_seen_up': '2012-10-29T13:42:05Z', 
'report_count': 1, 'forced_down': False, 'host': u'host1', 'disabled_reason': 
None, 'version': 10, 'disabled': True}}}
  Matcher: {u'payload': {u'nova_object.namespace': u'nova', 
u'nova_object.version': u'1.0', u'nova_object.name': u'ServiceStatusPayload', 
u'nova_object.data': {u'binary': u'nova-compute', u'topic': u'compute', 
u'last_seen_up': u'2012-10-29T13:42:05Z', u'report_count': 1, u'forced_down': 
False, u'host': u'host1', u'disabled_reason': None, u'version': 9, u'disabled': 
True}}, u'event_type': u'service.update', u'priority': u'INFO', 
u'publisher_id': u'nova-compute:host1'}
  Difference: 9 != 10

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554809/+subscriptions



[Yahoo-eng-team] [Bug 1554812] [NEW] Branding: Breadcrumb Action Menu Oddity

2016-03-08 Thread Diana Whitten
Public bug reported:

The action menus that show up in the breadcrumb aren't styled in a
dynamic way; they look odd in other themes:

https://i.imgur.com/vdZDWAm.png

** Affects: horizon
 Importance: Wishlist
 Assignee: Diana Whitten (hurgleburgler)
 Status: Triaged

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

** Changed in: horizon
   Importance: Undecided => Wishlist

** Changed in: horizon
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1554812

Title:
  Branding: Breadcrumb Action Menu Oddity

Status in OpenStack Dashboard (Horizon):
  Triaged

Bug description:
  The action menus that show up in the breadcrumb aren't styled in a
  dynamic way; they look odd in other themes:

  https://i.imgur.com/vdZDWAm.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1554812/+subscriptions



[Yahoo-eng-team] [Bug 1553921] Re: Nova API didn't check whether user provide injected file content

2016-03-08 Thread Zhenyu Zheng
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1553921

Title:
  Nova API didn't check whether user provide injected file content

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  As demonstrated in Nova API ref:
  http://developer.openstack.org/api-ref-compute-v2.1.html#servers-v2.1
  users should encode their injected file content with base64 before
  using it.
  But we lack validation of whether it is correctly encoded, and the
  content is passed down to nova/compute/manager for decoding [1]. If the
  provided content fails to decode, a Base64DecodeError is raised but not
  handled; this also causes a synchronization problem between the scheduler
  and compute.

  We should validate this first in the API layer, since that way we can
  save the cost of performing the rest of the boot work, such as scheduling.

  [1]
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n1910
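
A minimal sketch of the kind of API-layer check the report asks for (a
hypothetical helper, not Nova's actual validation code):

```python
import base64
import binascii

def is_valid_base64(content):
    """Return True if content decodes as strict base64 (hypothetical check)."""
    try:
        base64.b64decode(content, validate=True)
    except (binascii.Error, ValueError, TypeError):
        return False
    return True

# A correctly encoded injected file passes; malformed input is rejected
# up front, before scheduling and the rest of the boot work are performed.
assert is_valid_base64("aGVsbG8gd29ybGQ=")   # decodes to "hello world"
assert not is_valid_base64("not base64 !!")
```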

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1553921/+subscriptions



[Yahoo-eng-team] [Bug 1554825] [NEW] get_isolated_subnets does not return latest results

2016-03-08 Thread Shih-Hao Li
Public bug reported:

In Dnsmasq, the function get_isolated_subnets() returns a list of
subnets in a network if the subnet is not connected to a router.

The implementation of this function checks all the router interface
ports in a cached network object passed from DHCP agent. But the cached
network object is not updated when a subnet is added to or removed from
a router.
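
A tiny, hypothetical illustration (not the actual Dnsmasq/DHCP-agent code) of
the staleness pattern described above: a helper computing results from a cached
snapshot does not see later router-interface changes made on the server side.

```python
import copy

def isolated_subnets(network):
    """Return subnets with no router interface port (hypothetical helper)."""
    routed = {p["subnet_id"] for p in network["router_ports"]}
    return [s for s in network["subnets"] if s not in routed]

# Live state as seen by the server.
live = {"subnets": ["s1", "s2"], "router_ports": []}

# The agent caches a snapshot of the network object.
cached = copy.deepcopy(live)

# Later, subnet s1 is attached to a router on the server side.
live["router_ports"].append({"subnet_id": "s1"})

# The live view is correct, but the stale cached view still reports
# s1 as isolated -- the behavior described in this report.
assert isolated_subnets(live) == ["s2"]
assert isolated_subnets(cached) == ["s1", "s2"]
```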

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554825

Title:
  get_isolated_subnets does not return latest results

Status in neutron:
  New

Bug description:
  In Dnsmasq, the function get_isolated_subnets() returns a list of
  subnets in a network if the subnet is not connected to a router.

  The implementation of this function checks all the router interface
  ports in a cached network object passed from DHCP agent. But the
  cached network object is not updated when a subnet is added to or
  removed from a router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554825/+subscriptions



[Yahoo-eng-team] [Bug 1554824] [NEW] Many legacy eslint warnings swamp actual errors

2016-03-08 Thread Richard Jones
Public bug reported:

Horizon's eslint output currently has many hundreds of warnings which
drown out any linting output for new code.

** Affects: horizon
 Importance: Undecided
 Assignee: Richard Jones (r1chardj0n3s)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1554824

Title:
  Many legacy eslint warnings swamp actual errors

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Horizon's eslint output currently has many hundreds of warnings which
  drown out any linting output for new code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1554824/+subscriptions



[Yahoo-eng-team] [Bug 1554827] [NEW] Clean up warnings about loadwsgi.resolve and .require when running nova tests

2016-03-08 Thread Rahul U Nair
Public bug reported:

Fix deprecated warning when running tox in Nova, python2.7/site-
packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to
load are deprecated.  Call .resolve and .require separately

** Affects: nova
 Importance: Undecided
 Status: Invalid

** Description changed:

- Fix the deprecated warning when running tox in Nova, python2.7/site-
+ Fix deprecated warning when running tox in Nova, python2.7/site-
  packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to
  load are deprecated.  Call .resolve and .require separately

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554827

Title:
  Clean up warnings about loadwsgi.resolve and .require when running
  nova tests

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Fix deprecated warning when running tox in Nova, python2.7/site-
  packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters
  to load are deprecated.  Call .resolve and .require separately

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554827/+subscriptions



[Yahoo-eng-team] [Bug 1554809] Re: func-test test_service_disabled fails

2016-03-08 Thread Ken'ichi Ohmichi
problem seems fixed

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554809

Title:
  func-test test_service_disabled fails

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/43/286743/3/check/gate-nova-tox-functional/03715d9
  http://logs.openstack.org/42/286742/3/check/gate-nova-tox-functional/0a34837

  Traceback (most recent call last):
File 
"nova/tests/functional/notification_sample_tests/test_service_update.py", line 
44, in test_service_disabled
  replacements={'disabled': True})
File 
"nova/tests/functional/notification_sample_tests/notification_sample_base.py", 
line 92, in _verify_notification
  self.assertJsonEqual(sample_obj, notification)
File "nova/test.py", line 349, in assertJsonEqual
  inner_mismatch, verbose=True)
  testtools.matchers._impl.MismatchError: Match failed. Matchee: {'event_type': 
u'service.update', 'publisher_id': u'nova-compute:host1', 'priority': 'INFO', 
'payload': {'nova_object.namespace': 'nova', 'nova_object.version': '1.0', 
'nova_object.name': 'ServiceStatusPayload', 'nova_object.data': {'binary': 
u'nova-compute', 'topic': u'compute', 'last_seen_up': '2012-10-29T13:42:05Z', 
'report_count': 1, 'forced_down': False, 'host': u'host1', 'disabled_reason': 
None, 'version': 10, 'disabled': True}}}
  Matcher: {u'payload': {u'nova_object.namespace': u'nova', 
u'nova_object.version': u'1.0', u'nova_object.name': u'ServiceStatusPayload', 
u'nova_object.data': {u'binary': u'nova-compute', u'topic': u'compute', 
u'last_seen_up': u'2012-10-29T13:42:05Z', u'report_count': 1, u'forced_down': 
False, u'host': u'host1', u'disabled_reason': None, u'version': 9, u'disabled': 
True}}, u'event_type': u'service.update', u'priority': u'INFO', 
u'publisher_id': u'nova-compute:host1'}
  Difference: 9 != 10

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554809/+subscriptions



[Yahoo-eng-team] [Bug 1517839] Re: Make CONF.set_override with parameter enforce_type=True by default

2016-03-08 Thread ChangBo Guo(gcb)
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517839

Title:
  Make CONF.set_override with parameter enforce_type=True by default

Status in Ceilometer:
  New
Status in Cinder:
  New
Status in cloudkitty:
  Fix Released
Status in Designate:
  Confirmed
Status in Gnocchi:
  In Progress
Status in heat:
  New
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  New
Status in Murano:
  Fix Released
Status in neutron:
  New
Status in oslo.config:
  New
Status in oslo.messaging:
  Fix Released
Status in Rally:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  1. Problems:
 oslo_config provides the method CONF.set_override [1], which developers 
usually use to change a config option's value in tests. That's convenient.
 By default the parameter enforce_type=False, so it doesn't check the type or 
value of the override. If enforce_type=True is set, it will check the
 override's type and value. In production code (at run time), oslo_config 
always checks a config option's value.
 In short, we test and run code in different ways, so there is a gap: a config 
option with the wrong type or an invalid value can pass tests when
 enforce_type=False in consuming projects. That means some invalid or wrong 
tests are in our code base.
 There is a nova POC result from enabling "enforce_type=true" [2], and the 
violations had to be fixed in [3]

 [1] 
https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L2173
 [2] 
http://logs.openstack.org/16/242416/1/check/gate-nova-python27/97b5eff/testr_results.html.gz
 [3]  https://review.openstack.org/#/c/242416/  
https://review.openstack.org/#/c/242717/  
https://review.openstack.org/#/c/243061/

  2. Proposal
 1) Call the method CONF.set_override with enforce_type=True in consuming 
projects, and fix the violations that enforce_type=True reveals in each project.

 2) Make the method CONF.set_override use enforce_type=True by default
  in oslo_config

 Hope someone from the consuming projects can help enable
  enforce_type=True there and fix the violations.

 You can find more details and comments in
  https://etherpad.openstack.org/p/enforce_type_true_by_default
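
The gap between test-time and run-time validation can be sketched with a toy
config object. This is a minimal stand-in, not oslo.config itself; the class
and option names are made up for illustration:

```python
# Toy illustration of the gap described above (not oslo.config itself):
# a test-only override path that skips validation versus one that
# enforces the option's declared type, as enforce_type=True would.

class ToyConf:
    def __init__(self):
        # option name -> (expected type, default value)
        self._opts = {'workers': (int, 1)}
        self._overrides = {}

    def set_override(self, name, value, enforce_type=False):
        opt_type, _default = self._opts[name]
        if enforce_type:
            # Raises ValueError for bad input, like runtime option parsing.
            value = opt_type(value)
        self._overrides[name] = value

    def get(self, name):
        if name in self._overrides:
            return self._overrides[name]
        return self._opts[name][1]

conf = ToyConf()
conf.set_override('workers', 'ten')      # wrong type slips through silently
print(conf.get('workers'))               # the bogus string reaches the test

try:
    conf.set_override('workers', 'ten', enforce_type=True)
except ValueError:
    print('rejected')                    # matches production-time validation
```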

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1517839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554837] [NEW] port of network:router_interface exists even if the subnet updated to no-gateway

2016-03-08 Thread yaowei
Public bug reported:

reproduce steps:
1. create a subnet
2. connect to a router
3. update subnet --no-gateway
4. check port exists with device owner network:router_interface and fixed ip 
was the subnet gateway ip
5. check in router ns qr-xxx has ip address gateway ip

this bug was found in master branch, but I think it also affects
previous version.

proposed fix: when the subnet is updated with --no-gateway:
1. check for and delete the router interface
2. delete the gateway port
3. update the DHCP configuration

** Affects: neutron
 Importance: Undecided
 Assignee: yaowei (yaowei)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yaowei (yaowei)

** Description changed:

  reproduce steps:
  1. create a subnet
  2. connect to a router
  3. update subnet --no-gateway
  4. check port exists with device owner network:router_interface and fixed ip 
was the subnet gateway ip
  5. check in router ns qr-xxx has ip address gateway ip
  
- this problem was found in master branch, but I think it also previous
- version.
+ this bug was found in master branch, but I think it also affects
+ previous version.
  
  fix solutions:
  when update subnet --no-gateway:
  1. check and router interface delete
  2. delete gateway port
  3. update dhcp configure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554837

Title:
  port of network:router_interface exists even if the subnet updated to
  no-gateway

Status in neutron:
  New

Bug description:
  reproduce steps:
  1. create a subnet
  2. connect to a router
  3. update subnet --no-gateway
  4. check port exists with device owner network:router_interface and fixed ip 
was the subnet gateway ip
  5. check in router ns qr-xxx has ip address gateway ip

  this bug was found in master branch, but I think it also affects
  previous version.

  proposed fix: when the subnet is updated with --no-gateway:
  1. check for and delete the router interface
  2. delete the gateway port
  3. update the DHCP configuration

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536785] Re: The Bootstrap Developer Theme Preview has some terms that are not being translated

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/271453
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=76f8e93228d22bbb7dca882b18f4816753eb8015
Submitter: Jenkins
Branch:master

commit 76f8e93228d22bbb7dca882b18f4816753eb8015
Author: Lucas Palm 
Date:   Thu Jan 21 16:22:48 2016 -0600

Fixed a few missed translation strings on Theme preview page

Under the Developer Dashboard, the Bootstrap Theme Preview has a few
strings that are not being translated when they could and should be.
Many of the strings are missing the "translate" directive to enable the
translation, while others include the directive but are not being translated
because of other issues.  The translation directive seems not to work when
there is another HTML element tag nested inside the outer element containing
the translation.  As such is the case with the Dropdown elements.  The
solution to this is to use the translate filter in situations where the
directive will not work.

This change fixes most of the translation issues on this page.

Note: The pseudo translation tool was used to determine what strings are
correctly being translated or not.

Change-Id: I5aeb9f6212f5ba6a1f4cde0b72cce638937b7138
Closes-Bug: #1536785


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1536785

Title:
  The Bootstrap Developer Theme Preview has some terms that are not
  being translated

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Under the Developer Dashboard, the Bootstrap Theme Preview has a few
  strings that are not being translated when they could and should be.

  See the attached image for details.

  (Note: The image does not contain all occurrences on the page, just a
  few to get the point across.  There definitely is more.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1536785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554859] [NEW] in-used volume can't be migrated

2016-03-08 Thread xueyingchen(zte)
Public bug reported:

1. boot an instance from an image (create a new volume); the instance spawned
successfully

2. migrate the volume that the instance is using

3. the volume migration failed

4. check the error information in nova-compute.log:

2016-03-07 16:17:43.218 9817 INFO nova.virt.libvirt.iscsiscrub 
[req-8039f9e7-9d4d-48dc-9e2a-dd6073a954b0 8b34e1ab75024fcba0ea69a6fd0937c3 
181a578bc97642f2b9e153bec622f130 - - -] scrub devices: by-path no residual 
devices
2016-03-07 16:17:48.718 9817 WARNING nova.virt.libvirt.volume 
[req-8039f9e7-9d4d-48dc-9e2a-dd6073a954b0 8b34e1ab75024fcba0ea69a6fd0937c3 
181a578bc97642f2b9e153bec622f130 - - -] ISCSI volume not yet found at: vda. 
Will rescan & retry.  Try number: 0
2016-03-07 16:17:49.503 9817 ERROR nova.compute.manager 
[req-8039f9e7-9d4d-48dc-9e2a-dd6073a954b0 8b34e1ab75024fcba0ea69a6fd0937c3 
181a578bc97642f2b9e153bec622f130 - - -] [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] Failed to swap volume 
333df485-406c-4c51-92d1-5ba0aebfeb0d for 241abd7b-31fb-4763-86e7-3eda59744f09
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] Traceback (most recent call last):
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5914, in 
_swap_volume
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] resize_to)
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1238, in 
swap_volume
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] driver_bdm = 
driver_block_device.DriverVolumeBlockDevice(bdm)
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab]   File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 102, in 
__init__
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] self._transform()
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab]   File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 204, in 
_transform
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] raise _InvalidType
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] _InvalidType
2016-03-07 16:17:49.503 9817 TRACE nova.compute.manager [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] 
2016-03-07 16:17:49.771 9817 INFO nova.compute.manager 
[req-8039f9e7-9d4d-48dc-9e2a-dd6073a954b0 8b34e1ab75024fcba0ea69a6fd0937c3 
181a578bc97642f2b9e153bec622f130 - - -] [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] swap_volume: calling Cinder 
terminate_connection for 241abd7b-31fb-4763-86e7-3eda59744f09
2016-03-07 16:17:50.775 9817 INFO nova.compute.manager 
[req-8039f9e7-9d4d-48dc-9e2a-dd6073a954b0 8b34e1ab75024fcba0ea69a6fd0937c3 
181a578bc97642f2b9e153bec622f130 - - -] [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] swap_volume:calling Cinder 
terminate_connection end for: 241abd7b-31fb-4763-86e7-3eda59744f09
2016-03-07 16:17:51.239 9817 INFO nova.compute.manager 
[req-8039f9e7-9d4d-48dc-9e2a-dd6073a954b0 8b34e1ab75024fcba0ea69a6fd0937c3 
181a578bc97642f2b9e153bec622f130 - - -] [instance: 
303abfcb-b78b-4891-848f-b1f490ac69ab] swap_volume: Cinder 
migrate_volume_completion returned: {u'save_volume_id': 
u'333df485-406c-4c51-92d1-5ba0aebfeb0d'}
2016-03-07 16:17:51.392 9817 INFO nova.scheduler.client.report 
[req-8039f9e7-9d4d-48dc-9e2a-dd6073a954b0 8b34e1ab75024fcba0ea69a6fd0937c3 
181a578bc97642f2b9e153bec622f130 - - -] Compute_service record updated for 
('2C5_10_DELL05', '2C5_10_DELL05')
2016-03-07 16:17:51.394 9817 ERROR oslo_messaging.rpc.dispatcher 
[req-8039f9e7-9d4d-48dc-9e2a-dd6073a954b0 8b34e1ab75024fcba0ea69a6fd0937c3 
181a578bc97642f2b9e153bec622f130 - - -] Exception during message handling: 
2016-03-07 16:17:51.394 9817 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-03-07 16:17:51.394 9817 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2016-03-07 16:17:51.394 9817 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2016-03-07 16:17:51.394 9817 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2016-03-07 16:17:51.394 9817 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2016-03-07 16:17:51.394 9817 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
2016-03-07 16:

[Yahoo-eng-team] [Bug 1489291] Re: [RFE] Add tags to neutron resources

2016-03-08 Thread Armando Migliaccio
@Hirofumi, coding for this effort looks complete to me. Can you confirm?

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489291

Title:
  [RFE] Add tags to neutron resources

Status in neutron:
  Fix Released

Bug description:
  In most popular REST API interfaces, objects in the domain model can be
  "tagged" with zero or more simple strings. These strings may then be used
  to group and categorize objects in the domain model.

  Neutron resources in the current DB model do not contain any tags, and
  there is no generic, consistent way for the user to add tags and/or any
  other data.
  Adding tags to resources can be useful for management and
  orchestration in OpenStack, if it is done at the API level
  and IS NOT backend-specific data.

  The following use cases refer to adding tags to networks, but the same
  can be applicable to any other Neutron resource (core resource and router):

  1) Ability to map different networks in different OpenStack locations
 to one logically same network (for Multi site OpenStack)

  2) Ability to map ids from different management/orchestration systems to
 OpenStack networks in mixed environments, for example for project Kuryr,
 mapping a Docker network id to a Neutron network id

  3) Leverage tags by deployment tools

  spec : https://review.openstack.org/#/c/216021/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549068] Re: Ironic vif_port_id mismatch with neutron port id if specified multi networks

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/284025
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e147a63cb396ebaadd7c841a305cf3a52be1d373
Submitter: Jenkins
Branch:master

commit e147a63cb396ebaadd7c841a305cf3a52be1d373
Author: Zhenguo Niu 
Date:   Wed Feb 24 17:14:59 2016 +0800

[Ironic]Match vif-pif mac address before setting 'vif_port_id'

When booting an ironic instance with multi networks, the ironic port
'vif_port_id' may mismatch with the corresponding neutron port.

Closes-Bug: #1549068
Co-Authored-By: Sivaramakrishna Garimella (sivaramakrishna.garime...@hp.com)

Change-Id: Id5f033136283987eef8de4ce2f6be256e48cdbf8


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1549068

Title:
  Ironic vif_port_id mismatch with neutron port id if specified multi
  networks

Status in Ironic:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  when booting an ironic instance with multiple networks, we create the
  network resources with the macs from the ironic node ports, and then set
  the neutron port id back in the ironic port's extra/vif_port_id, but the
  current logic can produce mismatched vif-pif id pairs.

  code:

  for vif, pif in zip(network_info, ports):
      port_id = six.text_type(vif['id'])
      patch = [{'op': 'add',
                'path': '/extra/vif_port_id',
                'value': port_id}]
      self.ironicclient.call("port.update", pif.uuid, patch)

  we should check whether the mac addresses match between vif and pif
  before setting the 'vif_port_id'.
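
The suggested MAC check could look like the following sketch. The vif/pif
data shapes (a dict with 'id'/'address', an object with 'uuid'/'address'
attributes) are assumptions for illustration, not Nova's exact objects:

```python
from types import SimpleNamespace

# Sketch of the MAC-aware pairing the report asks for: match each ironic
# port (pif) to the neutron VIF with the same MAC instead of relying on
# zip() order.

def pair_vifs_to_pifs(network_info, ports):
    """Return (pif uuid, vif id) pairs matched by MAC address."""
    vif_id_by_mac = {vif['address']: str(vif['id']) for vif in network_info}
    pairs = []
    for pif in ports:
        vif_id = vif_id_by_mac.get(pif.address)
        if vif_id is not None:           # skip pifs with no matching VIF
            pairs.append((pif.uuid, vif_id))
    return pairs

vifs = [{'id': 'neutron-a', 'address': 'aa:aa:aa:aa:aa:01'},
        {'id': 'neutron-b', 'address': 'aa:aa:aa:aa:aa:02'}]
pifs = [SimpleNamespace(uuid='pif-1', address='aa:aa:aa:aa:aa:02'),
        SimpleNamespace(uuid='pif-2', address='aa:aa:aa:aa:aa:01')]
# zip() order would wrongly pair pif-1 with neutron-a; matching by MAC
# pairs pif-1 with neutron-b and pif-2 with neutron-a.
print(pair_vifs_to_pifs(vifs, pifs))
```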

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1549068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554869] [NEW] [RFE]Add error codes for HTTP response

2016-03-08 Thread xiexs
Public bug reported:

Error codes with more detailed information about the errors would
be quite useful for developers using a Neutron API.

Thus we should add a wrapper that collects the error codes and arranges them
according to an appropriate design pattern [1] when the REST API encounters an
error, and then returns them to the requesting user.

[1] http://specs.openstack.org/openstack/api-wg/guidelines/errors.html
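
A hedged sketch of what such a structured error body could look like, in the
spirit of the API-WG errors guideline cited above. The function name and the
'code' values here are invented for illustration; they are not Neutron's
actual error codes:

```python
# Build a JSON-serializable error body with a stable, machine-readable
# code alongside the human-readable message, roughly following the
# API-WG errors guideline. All identifiers here are illustrative.

def make_error_response(status, code, title, detail, request_id):
    return {
        'errors': [{
            'status': status,        # HTTP status of the failure
            'code': code,            # stable, machine-readable identifier
            'title': title,          # short human-readable summary
            'detail': detail,        # longer explanation for developers
            'request_id': request_id,
        }]
    }

body = make_error_response(
    404, 'neutron.port.not-found', 'Port not found',
    'No port with the requested id exists in this project.',
    'req-c5cf3153-b01f-4be7-88f6-730e28fa4d09')
print(body['errors'][0]['code'])
```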

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554869

Title:
  [RFE]Add error codes for HTTP response

Status in neutron:
  New

Bug description:
  Error codes with more detailed information about the errors would
  be quite useful for developers using a Neutron API.

  Thus we should add a wrapper that collects the error codes and arranges them
  according to an appropriate design pattern [1] when the REST API encounters
  an error, and then returns them to the requesting user.

  [1] http://specs.openstack.org/openstack/api-wg/guidelines/errors.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554153] Re: Updating a port without dns_name parameter clears dns_name when dns-integration is enabled

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289551
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=58e9102bff7280b36bcd24a55b7403d13ce68041
Submitter: Jenkins
Branch:master

commit 58e9102bff7280b36bcd24a55b7403d13ce68041
Author: James Anziano 
Date:   Mon Mar 7 19:42:47 2016 +

Only clear dns_name when user specifies parameter

Previously dns_name would be set to empty string as long as the
dns-integration extension was enabled and the user didn't specify a
different value. This meant that dns_name would be cleared even if the
user didn't pass it as a parameter at all. This patch only clears
dns_name if the user passed that parameter without specifying a value.

Change-Id: I1be9a2f9c3dc9850cc167b204506e36d5272d642
Closes-Bug: 1554153


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554153

Title:
  Updating a port without dns_name parameter clears dns_name when dns-
  integration is enabled

Status in neutron:
  Fix Released

Bug description:
  If the dns-integration extension is enabled, updating a port without 
explicitly specifying a dns_name parameter will clear the dns_name field.
   
  To reproduce:
  Make sure your environment is configured to use the dns-integration extension.
  Have a port with a dns_name already set.
  Run neutron port-update my-port --name=my-new-port
  The command should complete successfully, but if you now run neutron 
port-show my-new-port, you will see the dns_name field is empty in addition to 
any other changes you requested.
   
  DevStack all-in-one built from master
  Perceived severity: medium
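
The idea behind the fix can be sketched with a sentinel that distinguishes
"parameter absent" from "parameter passed empty". This is a minimal
illustration with made-up names, not Neutron's actual update code:

```python
# Only touch dns_name when the caller actually passed the key; an absent
# key keeps the stored value, while an explicit empty value clears it.

_UNSET = object()

def apply_port_update(current_port, request_body):
    dns_name = request_body.get('dns_name', _UNSET)
    updated = dict(current_port)
    updated.update({k: v for k, v in request_body.items() if k != 'dns_name'})
    if dns_name is not _UNSET:
        updated['dns_name'] = dns_name or ''   # explicit empty value clears it
    return updated                             # absent key: old value survives

port = {'name': 'my-port', 'dns_name': 'web01'}
print(apply_port_update(port, {'name': 'my-new-port'})['dns_name'])  # kept
print(apply_port_update(port, {'dns_name': ''})['dns_name'])         # cleared
```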

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554151] Re: update_port failure across server restart

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289526
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4f04102e5769930b03b9f28616c7734b74bbe868
Submitter: Jenkins
Branch:master

commit 4f04102e5769930b03b9f28616c7734b74bbe868
Author: James Anziano 
Date:   Mon Mar 7 19:08:39 2016 +

Ensures DNS_DRIVER is loaded before it is checked

Previously it was possible for DNS_DRIVER to be checked here before anything
had attempted to load it, causing the check to erroneously fail. This patch
makes sure that the check will not fail simply because nothing had loaded it
prior by attempting to load it immediately before the check.

Change-Id: I34537beaf675db2634493dfef27b69051a8d0781
Closes-Bug: 1554151


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554151

Title:
  update_port failure across server restart

Status in neutron:
  Fix Released

Bug description:
  Steps to reproduce:
  Make sure your environment is configured to use the dns-integration extension.
  Have an existing port created. Restart the server. It might be helpful first 
to insert a line into the beginning of the  process_update_port method in 
neutron/neutron/plugins/ml2/extensions/dns_integration.py that prints out the 
DNS_DRIVER variable. It will be None the first time this method is called, 
afterwards it will correctly be an instance of your DNS driver object.
  After restarting the server, run neutron port-update my-port with any 
arguments.
  While the behavior is the same regardless of the argument, the bug only 
becomes a problem if the arguments are relevant to the DNS extension, such as 
dns_name or updating the IP address.

  The command will claim to have completed successfully, but the DNS
  driver is not loaded until the end of the process, after it has been
  used. Certain functions will check to make sure the DNS driver has
  been loaded and will exit silently and prematurely because it hasn't
  been loaded yet. Any subsequent port-update commands will be fine
  because the driver is now loaded until the server gets restarted
  again.

  DevStack all-in-one built from master
  Perceived severity: medium
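
A toy version of the load-before-check pattern the fix applies: look the
driver up through a loader that initializes it on first use, instead of
reading a module global that may still be None after a restart. Names here
are illustrative, not Neutron's actual module layout:

```python
# Module global that starts unloaded, as after a server restart.
DNS_DRIVER = None

def _get_dns_driver():
    """Load the driver on first use so a check never sees a stale None."""
    global DNS_DRIVER
    if DNS_DRIVER is None:
        DNS_DRIVER = {'name': 'toy-dns-driver'}   # stand-in for driver import
    return DNS_DRIVER

def process_update_port(port):
    driver = _get_dns_driver()    # loaded immediately before the check
    if driver is None:
        return None               # the silent early exit the bug describes
    return 'dns updated for %s' % port['id']

# Works on the very first call after a (simulated) restart:
print(process_update_port({'id': 'my-port'}))
```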

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554876] [NEW] router not found warning logs in the L3 agent

2016-03-08 Thread Kevin Benton
Public bug reported:

The L3 agent during a normal tempest run will be filled with warnings
like the following:

2016-03-08 10:10:30.465 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup
2016-03-08 10:10:34.197 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup
2016-03-08 10:10:35.535 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup
2016-03-08 10:10:43.025 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup
2016-03-08 10:10:47.029 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup


This is completely normal as routers are deleted from the server during the 
data retrieval process of the L3 agent and should not be at the warning level.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554876

Title:
  router not found warning logs in the L3 agent

Status in neutron:
  New

Bug description:
  The L3 agent during a normal tempest run will be filled with warnings
  like the following:

  2016-03-08 10:10:30.465 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup
  2016-03-08 10:10:34.197 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup
  2016-03-08 10:10:35.535 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup
  2016-03-08 10:10:43.025 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup
  2016-03-08 10:10:47.029 18962 WARNING neutron.agent.l3.agent [-] Info for 
router 3688a110-8cfe-41c6-84e3-bfd965238304 was not found. Performing router 
cleanup

  
  This is completely normal as routers are deleted from the server during the 
data retrieval process of the L3 agent and should not be at the warning level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527585] Re: nova 500's while creating image of a server with image name length more than 256 chars

2016-03-08 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1527585

Title:
  nova 500's while creating image of a server with image name length
  more than 256 chars

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The issue is found in Kilo release

  dpkg -l | grep nova
  ii  nova-common  1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - common files
  ii  nova-compute 1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - compute node base
  ii  nova-compute-libvirt 1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - compute node libvirt support
  ii  nova-compute-qemu1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute - compute node (QEmu)
  ii  python-nova  1:2015.1.2-0ubuntu2~cloud0   
 all  OpenStack Compute Python libraries
  ii  python-novaclient1:2.22.0-0ubuntu2~cloud0 
 all  client library for OpenStack Compute API

  Below are the steps to reproduce:

  Step1: Launch an instance

  curl -i
  'https://controller:8774/v2/df02c9aceac841b2a3b98e2ac1816a5f/servers'
  -X POST -H 'Content-Type: application/json' -H 'Accept:
  application/json' -H 'X-Auth-Token: 9f3a79bb912d42f896c2bdfd7163c991'
  -d '{"server": {"name": "test_server", "imageRef": "ea54375f-a924
  -4b3e-b05d-a47b8a8a1aee", "block_device_mapping_v2": [{"boot_index":
  0, "uuid": "ea54375f-a924-4b3e-b05d-a47b8a8a1aee", "source_type":
  "image", "device_name": "vda", "volume_size": "4", "destination_type":
  "volume", "delete_on_termination": "1"}], "flavorRef": "607e8e9e-971c-
  4c60-b0cb-ddf9bc8adc5d", "user_data":
  "IyEvYmluL3NoCnN1ZG8gaWZjb25maWcgZXRoMCBtdHUgMTQwMA==", "networks":
  [{"uuid": "5e5c99a6-3395-47ca-afb1-3cebe261a22a"}]}}'

  Step2: Create image of this instance once it becomes ACTIVE

  curl -i 
'https://controller:8774/v2/df02c9aceac841b2a3b98e2ac1816a5f/servers/6f94cd11-a7b2-4a13-99ec-c2f7522e2ff6/action'
 -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -H 
'X-Auth-Token: 9f3a79bb912d42f896c2bdfd7163c991'  -d '{"createImage": {"name": 
"image-name-"}}'
  HTTP/1.1 500 Internal Server Error
  Date: Fri, 18 Dec 2015 17:07:54 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Access-Control-Allow-Origin: *
  Access-Control-Allow-Headers: Accept, Content-Type, X-Auth-Token, 
X-Subject-Token
  Access-Control-Expose-Headers: Accept, Content-Type, X-Auth-Token, 
X-Subject-Token
  Access-Control-Allow-Methods: GET POST OPTIONS PUT DELETE PATCH
  Content-Length: 128
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-7435e18d-558e-4d78-abb6-3d39b329b4d0
  Via: 1.1 compute.jiocloud.com:8774
  Connection: close

  {"computeFault": {"message": "The server has either erred or is
  incapable of performing the requested operation.", "code": 500}}

  in nova-api.log following traceback was found.

  2015-12-18 17:07:54.667 30668 ERROR nova.api.openstack 
[req-7435e18d-558e-4d78-abb6-3d39b329b4d0 57fb696ef2f14240b79bbb796d1eff78 
df02c9aceac841b2a3b98e2ac1816a5f - - -] Caught error: Internal Server Error 
(HTTP 500) (Request-ID: req-0b751c03-8ce4-4a25-b8ad-5d3b162c44f1)
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-12-18 17:07:54.667 30668 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/keyston

[Yahoo-eng-team] [Bug 1554696] Re: Neutron server log filled with "device requested by agent not found"

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/290101
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=cb7e70497f68c295080add6c739743e6ea81e824
Submitter: Jenkins
Branch:master

commit cb7e70497f68c295080add6c739743e6ea81e824
Author: Kevin Benton 
Date:   Tue Mar 8 11:54:25 2016 -0800

Downgrade "device not found" log message

This message is a normal occurrence when an agent requests
details about a port that was just deleted. It's not a
warning condition that the operator can take any action on
so this patch downgrades it to debug.

Change-Id: I4c19993f03bf4d417bff2d3d45bb40b1b732650d
Closes-Bug: #1554696


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554696

Title:
  Neutron server log filled with "device requested by agent not found"

Status in neutron:
  Fix Released

Bug description:
  The neutron server logs from a normal test run are filled with the
  following entries:

  2016-03-08 10:07:29.265 18894 WARNING neutron.plugins.ml2.rpc [req-
  c5cf3153-b01f-4be7-88f6-730e28fa4d09 - -] Device 91993890-6352-4488
  -9e1f-1a419fa17bc1 requested by agent ovs-agent-devstack-trusty-ovh-
  bhs1-8619597 not found in database

  
  This occurs whenever an agent requests details about a recently deleted port. 
It's not a valid warning condition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554695] Re: network not found warnings in test runs

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/290098
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0db4ba3f855d84a3d7522aa17dc89c4792d2ef4a
Submitter: Jenkins
Branch:master

commit 0db4ba3f855d84a3d7522aa17dc89c4792d2ef4a
Author: Kevin Benton 
Date:   Tue Mar 8 11:48:01 2016 -0800

Downgrade network not found log in DHCP RPC

This is a normal occurence as networks are created and
deleted. It's not something that an operator should be
warned about.

Change-Id: I3bb498a29d93a88b059a669d510d21b4c65ea014
Closes-Bug: #1554695


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554695

Title:
  network not found warnings in test runs

Status in neutron:
  Fix Released

Bug description:
  A neutron server log from a normal test run is filled with entries
  like the following:

  2016-03-08 10:08:32.269 18894 WARNING neutron.api.rpc.handlers.dhcp_rpc [req-a55cec8d-37ee-46b7-97f3-aadf91bcd512 - -] Network 1fd1dfd5-8d24-4016-b8e4-032ec8ef3ce1 could not be found, it might have been deleted concurrently.

  
  They are completely normal during network creation/deletion so it's not a 
warning condition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554695/+subscriptions



[Yahoo-eng-team] [Bug 1551530] Re: With SNAT disabled on a legacy router, pings to floating IPs are replied with fixed IPs

2016-03-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/286392
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=cea149212e6387932eaac8448c951d2ceb7ae23d
Submitter: Jenkins
Branch:master

commit cea149212e6387932eaac8448c951d2ceb7ae23d
Author: Hong Hui Xiao 
Date:   Tue Mar 1 05:42:42 2016 +

Add fip nat rules even if router disables shared snat

For legacy router, there are some iptables rules added for external gateway
port. Some of these rules are for shared snat, some are for floating ip.

When user disables shared snat of router gateway, some of the iptables rules
that floating ip needs will not be added to router. This will cause the
reported bug, ping floating ip but reply with fixed ip.

The fix will add the iptables rules that floating ip needs, no matter if
router enables shared snat. A functional test is also added for the issue.

Change-Id: I3cf4dff90f47a720a2e6a92c9ede2bc067ebd6e7
Closes-Bug: #1551530


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551530

Title:
  With SNAT disabled on a legacy router, pings to floating IPs are
  replied with fixed IPs

Status in neutron:
  Fix Released

Bug description:
  On my single node devstack setup, there are 2 VMs hosted. VM1 has no floating
IP assigned. VM2 has a floating IP assigned. From VM1, ping VM2 using the
floating IP. The ping output reports that the replies come from VM2's fixed IP
address. The reply should come from VM2's floating IP address.

  VM1: 10.0.0.4
  VM2: 10.0.0.3  floating ip:172.24.4.4

  $ ping 172.24.4.4 -c 1 -W 1
  PING 172.24.4.4 (172.24.4.4): 56 data bytes
  64 bytes from 10.0.0.3: seq=0 ttl=64 time=3.440 ms

  This only happens for a legacy router with SNAT disabled, and when VM1
  and VM2 are in the same subnet.

  Comparing the iptables rules, the following rule is missing when SNAT
  is disabled.

  Chain neutron-vpn-agen-snat (1 references)
   pkts bytes target prot opt in  out source     destination
    184       SNAT   all  --  *   *   0.0.0.0/0  0.0.0.0/0    mark match ! 0x2/0x ctstate DNAT to:172.24.4.6

  This rule SNATs internal traffic to the floating IP. Without it, the
  packets VM2 sends in reply to VM1 are treated as traffic inside the
  subnet and do not go through the router. As a result, the DNAT record
  in the router namespace does not apply to the reply packets.

  The intended fix adds the mentioned iptables rule regardless of
  whether SNAT is enabled. The packets VM2 sends in reply to VM1 are
  then destined to <172.24.4.6> and traverse the router namespace, so
  the DNAT and SNAT records apply and make things right.
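
  As a command, the restored rule corresponds roughly to the following
  (chain name and external address are taken from the output above; the
  exact flags are an approximation, not a quote of the patch, and the
  mark mask in the quoted output is truncated):

```shell
# Rough sketch of the rule the fix adds unconditionally: reply traffic
# that was DNAT'ed on the way in (i.e. arrived via a floating IP) is
# SNAT'ed back to the router's external address, even when shared SNAT
# is disabled on the gateway.
iptables -t nat -A neutron-vpn-agen-snat \
    -m mark ! --mark 0x2 \
    -m conntrack --ctstate DNAT \
    -j SNAT --to-source 172.24.4.6
```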

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551530/+subscriptions



[Yahoo-eng-team] [Bug 1554922] [NEW] External DNS driver fails with Python 3.4 with an index computed with a division operation when slicing a string

2016-03-08 Thread Miguel Lavalle
Public bug reported:

The external DNS driver for Designate has a method, _get_in_addr_zone_name,
that uses a division operation to compute an index for slicing a string.
Under Python 3.4 the slicing fails if the index is not explicitly converted
to int.

** Affects: neutron
 Importance: Medium
 Assignee: Miguel Lavalle (minsel)
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Miguel Lavalle (minsel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554922

Title:
  External DNS driver fails with Python 3.4 with an index computed with
  a division operation when slicing a string

Status in neutron:
  New

Bug description:
  The external DNS driver for Designate has a method,
  _get_in_addr_zone_name, that uses a division operation to compute an
  index for slicing a string. Under Python 3.4 the slicing fails if the
  index is not explicitly converted to int.
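
  The failure mode can be reproduced in isolation (the reverse-zone
  string and prefix length below are illustrative, not taken from the
  driver):

```python
# In Python 3, `/` is true division and returns a float even when the
# result is exact, and slicing a string with a float index raises
# TypeError. The fix is an explicit int() conversion (or floor
# division with `//`).
zone = "2.0.192.in-addr.arpa."  # illustrative reverse-zone string
prefixlen = 24

try:
    zone[:prefixlen / 8]        # works on Python 2, TypeError on Python 3
except TypeError as exc:
    print("slice failed:", exc)

print(zone[:int(prefixlen / 8)])  # -> 2.0
print(zone[:prefixlen // 8])      # -> 2.0
```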

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554922/+subscriptions



[Yahoo-eng-team] [Bug 1554930] [NEW] nova add-secgroup help needs to be updated

2016-03-08 Thread Abhishek Talwar
Public bug reported:

Currently, nova add-secgroup asks the user to enter only a secgroup name to
add a secgroup to a server. But the command works fine with either a
secgroup name or a secgroup id, so the help needs to be updated to reflect
this.

Steps to reproduce the bug:-

1. Create a secgroup named 'test' using the command 'nova secgroup-
create':-

$ nova secgroup-create test testing
+--------------------------------------+------+-------------+
| Id                                   | Name | Description |
+--------------------------------------+------+-------------+
| d9b61d5a-4f9e-4b41-9fd5-e412aaeecb6b | test | testing     |
+--------------------------------------+------+-------------+


2. Now add secgroup 'test' to a server using the secgroup-id

$ nova add-secgroup vm1  d9b61d5a-4f9e-4b41-9fd5-e412aaeecb6b

The command runs successfully and the secgroup gets added to the server.

** Affects: python-novaclient
 Importance: Undecided
 Assignee: Abhishek Talwar (abhishek-talwar)
 Status: In Progress


** Tags: python-novaclient

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
   Status: New => In Progress

** Changed in: python-novaclient
 Assignee: (unassigned) => Abhishek Talwar (abhishek-talwar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554930

Title:
  nova add-secgroup help needs to be updated

Status in python-novaclient:
  In Progress

Bug description:
  Currently, nova add-secgroup asks the user to enter only a secgroup name
  to add a secgroup to a server. But the command works fine with either a
  secgroup name or a secgroup id, so the help needs to be updated to
  reflect this.

  Steps to reproduce the bug:-

  1. Create a secgroup named 'test' using the command 'nova secgroup-
  create':-

  $ nova secgroup-create test testing
  +--------------------------------------+------+-------------+
  | Id                                   | Name | Description |
  +--------------------------------------+------+-------------+
  | d9b61d5a-4f9e-4b41-9fd5-e412aaeecb6b | test | testing     |
  +--------------------------------------+------+-------------+

  
  2. Now add secgroup 'test' to a server using the secgroup-id

  $ nova add-secgroup vm1  d9b61d5a-4f9e-4b41-9fd5-e412aaeecb6b

  The command runs successfully and the secgroup gets added to the
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1554930/+subscriptions



[Yahoo-eng-team] [Bug 1554601] Re: able to update health monitor attributes which is attached to pool in lbaas

2016-03-08 Thread abhishek6254
** Also affects: f5openstackcommunitylbaas
   Importance: Undecided
   Status: New

** Changed in: f5openstackcommunitylbaas
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554601

Title:
  able to update health monitor attributes which is attached to pool in
  lbaas

Status in OpenStack Neutron LBaaS Integration:
  Confirmed
Status in neutron:
  Confirmed

Bug description:
  Reproduced a bug in Load Balancer:
  1. Created a pool.
  2. Attached members to the pool.
  3. Associated a health monitor with the pool.
  4. Associated a VIP with the pool.
  5. When I edit the attributes of the health monitor, an error is shown
  ("Error: Failed to update health monitor"), but the monitor is actually
  updated successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/f5openstackcommunitylbaas/+bug/1554601/+subscriptions



[Yahoo-eng-team] [Bug 1554942] [NEW] Floating IP is not reachable when the data NIC is brought up again (testing the L3 HA VRRP feature)

2016-03-08 Thread Abhishek G M
Public bug reported:

steps to reproduce:
1) create a router
2) set the router gateway with the public network
3) router-interface-add with the subnet
4) boot a VM
5) create a floating IP
6) associate the floating IP with the booted VM's port id
7) continuously ping the floating IP
8) make the active controller's data NIC down
9) make the previous active controller's data NIC up
The floating IP is not reachable after the above steps.

Before starting the testing scenario:
stack@padawan-ccp-c1-m1-mgmt:~$ neutron l3-agent-list-hosting-router b4fd1fd4-1c21-4be3-8672-e085c3e195f9
+--------------------------------------+------------------------+----------------+-------+----------+
| id                                   | host                   | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------+----------------+-------+----------+
| da302b3f-b8c6-4068-9abd-6486cd6d0654 | padawan-ccp-c1-m3-mgmt | True           | :-)   | standby  |
| fc1cd20b-81dd-4bbc-bfb8-5652c08dc0c3 | padawan-ccp-c1-m1-mgmt | True           | :-)   | active   |
+--------------------------------------+------------------------+----------------+-------+----------+
After making the data NIC of the active controller down:
stack@padawan-ccp-c1-m1-mgmt:~$ neutron l3-agent-list-hosting-router b4fd1fd4-1c21-4be3-8672-e085c3e195f9
+--------------------------------------+------------------------+----------------+-------+----------+
| id                                   | host                   | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------+----------------+-------+----------+
| da302b3f-b8c6-4068-9abd-6486cd6d0654 | padawan-ccp-c1-m3-mgmt | True           | :-)   | active   |
| fc1cd20b-81dd-4bbc-bfb8-5652c08dc0c3 | padawan-ccp-c1-m1-mgmt | True           | :-)   | active   |
+--------------------------------------+------------------------+----------------+-------+----------+
After making the data NIC of the previous active controller up:
stack@padawan-ccp-c1-m1-mgmt:~$ neutron l3-agent-list-hosting-router b4fd1fd4-1c21-4be3-8672-e085c3e195f9
+--------------------------------------+------------------------+----------------+-------+----------+
| id                                   | host                   | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------+----------------+-------+----------+
| da302b3f-b8c6-4068-9abd-6486cd6d0654 | padawan-ccp-c1-m3-mgmt | True           | :-)   | standby  |
| fc1cd20b-81dd-4bbc-bfb8-5652c08dc0c3 | padawan-ccp-c1-m1-mgmt | True           | :-)   | active   |
+--------------------------------------+------------------------+----------------+-------+----------+
Below is the ping output of the floating ip
64 bytes from 172.21.11.11: icmp_seq=1 ttl=62 time=1.79 ms
64 bytes from 172.21.11.11: icmp_seq=2 ttl=62 time=1.52 ms
64 bytes from 172.21.11.11: icmp_seq=3 ttl=62 time=0.859 ms
64 bytes from 172.21.11.11: icmp_seq=4 ttl=62 time=0.937 ms
64 bytes from 172.21.11.11: icmp_seq=5 ttl=62 time=0.645 ms
64 bytes from 172.21.11.11: icmp_seq=6 ttl=62 time=1.45 ms
64 bytes from 172.21.11.11: icmp_seq=7 ttl=62 time=0.804 ms
64 bytes from 172.21.11.11: icmp_seq=8 ttl=62 time=0.872 ms
64 bytes from 172.21.11.11: icmp_seq=9 ttl=62 time=0.780 ms
64 bytes from 172.21.11.11: icmp_seq=10 ttl=62 time=0.719 ms
64 bytes from 172.21.11.11: icmp_seq=11 ttl=62 time=1.19 ms
64 bytes from 172.21.11.11: icmp_seq=12 ttl=62 time=1.40 ms
64 bytes from 172.21.11.11: icmp_seq=13 ttl=62 time=0.633 ms
64 bytes from 172.21.11.11: icmp_seq=14 ttl=62 time=0.888 ms
64 bytes from 172.21.11.11: icmp_seq=15 ttl=62 time=1.76 ms
64 bytes from 172.21.11.11: icmp_seq=16 ttl=62 time=0.906 ms
64 bytes from 172.21.11.11: icmp_seq=17 ttl=62 time=0.727 ms
64 bytes from 172.21.11.11: icmp_seq=18 ttl=62 time=1.38 ms
64 bytes from 172.21.11.11: icmp_seq=19 ttl=62 time=0.719 ms
64 bytes from 172.21.11.11: icmp_seq=20 ttl=62 time=0.778 ms
64 bytes from 172.21.11.11: icmp_seq=21 ttl=62 time=1.58 ms
64 bytes from 172.21.11.11: icmp_seq=22 ttl=62 time=0.717 ms
64 bytes from 172.21.11.11: icmp_seq=23 ttl=62 time=0.782 ms
64 bytes from 172.21.11.11: icmp_seq=24 ttl=62 time=1.01 ms
64 bytes from 172.21.11.11: icmp_seq=25 ttl=62 time=0.709 ms
64 bytes from 172.21.11.11: icmp_seq=26 ttl=62 time=1.03 ms
64 bytes from 172.21.11.11: icmp_seq=27 ttl=62 time=0.826 ms
64 bytes from 172.21.11.11: icmp_seq=28 ttl=62 time=0.715 ms
64 bytes from 172.21.11.11: icmp_seq=29 ttl=62 time=0.974 ms
64 bytes from 172.21.11.11: icmp_seq=30 ttl=62 time=0.826 ms
64 bytes from 172.21.11.11: icmp_seq=39 ttl=62 time=3.61 ms
64 bytes from 172.21.11.11: icmp_seq=40 ttl=62 time=0.841 ms
64 bytes from 172.21.11.11: icmp_seq=41 ttl=62 time=0.824 ms


The gateway is also not pingable.
The floating IP works from the router namespace but not from outside:
stack@padawan-ccp-c1-m1-mgmt:~$ sudo ip netns exec qrouter-b4fd1fd4-1c21-4be3-8672-e085c3e195f9 bash
root@padawan-ccp-c1-m1-mgmt:/home/stack# pin