[Yahoo-eng-team] [Bug 1430984] Re: Recent RPC namespacing breaks rolling upgrades

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430984

Title:
  Recent RPC namespacing breaks rolling upgrades

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  Several patches merged as a part of:
  
https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z

  broke Neutron rolling upgrades (specifically: upgrading the server(s)
  before the agents, or vice versa). This was done knowingly and was
  discussed in the spec process. While we don't test a rolling upgrade
  scenario, there is no reason to break it knowingly. I've spoken to
  operators who have successfully performed such an upgrade from I to J,
  and it will be very surprising to them if the same doesn't work from J
  to K.

  The breakage comes from the introduction of RPC namespaces, a very
  useful concept of putting RPC endpoints in separate namespaces: the same
  method name may be exposed more than once in the same process, as long
  as each copy belongs to a different namespace.

  Possible solutions:
  Have the server listen both on the new namespaces and on the root
  namespace. However, this effectively merges all such methods back into
  one big namespace, which rather defeats the purpose of namespacing.
  Alternatively, we could make a change in oslo.messaging: if a new,
  optional backwards_compatibility flag is passed to a target along with
  a namespace, the dispatcher checks the root namespace as well as the
  given namespace; we would then simply stop passing the flag in the L
  cycle (meaning we only support rolling upgrades from version N to N+1).
  Note that to support a scenario where an agent is upgraded before the
  server, all of the K agents would have to implement a fallback even
  with the proposed solution.
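
  A minimal sketch of the first option (class and method names are
  invented; this is not the actual Neutron code): the same endpoint is
  registered twice, once under its new namespace and once under the root
  namespace, so unupgraded peers still dispatch.

    import oslo_messaging as messaging  # packaged as oslo.messaging pre-Kilo


    class PluginRpcCallback(object):
        # New-style endpoint: methods live in the 'dhcp' namespace.
        target = messaging.Target(namespace='dhcp', version='1.0')

        def get_active_networks(self, context):
            return []


    class PluginRpcCallbackLegacy(PluginRpcCallback):
        # Same methods re-registered in the root namespace so agents that
        # predate the namespacing change can still reach them.
        target = messaging.Target(version='1.0')


    def rpc_endpoints():
        return [PluginRpcCallback(), PluginRpcCallbackLegacy()]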

  Testing:
  I've been working on basic RPC tests:
  
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:rpc_tests,n,z

  But I don't think such a framework will allow us to test rolling
  upgrades. I can't think of an alternative to actually performing one
  and seeing what happens (A spin on the grenade job).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430984/+subscriptions


[Yahoo-eng-team] [Bug 1356053] Re: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog

2015-04-09 Thread Thierry Carrez
** Changed in: sahara
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1356053

Title:
  Doesn't properly get keystone endpoint when Keystone is configured to
  use templated catalog

Status in devstack - openstack dev environments:
  In Progress
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in Python client library for Sahara (ex. Savanna):
  Fix Released
Status in OpenStack Data Processing (Sahara):
  Fix Released
Status in Tempest:
  In Progress

Bug description:
  When using the keystone static catalog file to register endpoints 
(http://docs.openstack.org/developer/keystone/configuration.html#file-based-service-catalog-templated-catalog),
 an endpoint registered (correctly) as catalog.region.data_processing gets 
read as "data-processing" by keystone.
  Thus, when Sahara looks for an endpoint, it is unable to find one for 
data_processing.

  This causes a problem with the commandline interface and the
  dashboard.

  Keystone seems to be converting underscores to dashes here:
  
https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/templated.py#L47

  Modifying this line to not perform the replacement seems to work fine
  for me, but it may have unintended consequences.
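
  A tiny demonstration of the mismatch described above (the function name
  is invented; only the replace() call reflects the keystone line linked
  to):

    def catalog_key_to_service_type(key):
        # the substitution the reporter points at in templated.py
        return key.replace('_', '-')

    registered = 'data_processing'         # what the template file contains
    looked_up = catalog_key_to_service_type(registered)
    assert looked_up == 'data-processing'  # what sahara's lookup never finds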

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1356053/+subscriptions


[Yahoo-eng-team] [Bug 1392773] Re: Live migration of volume backed instances broken after upgrade to Juno

2015-04-09 Thread John Garbutt
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392773

Title:
  Live migration of volume backed instances broken after upgrade to Juno

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  I'm running nova in a virtualenv with a checkout of stable/juno:

  root@compute1:/opt/openstack/src/nova# git branch
stable/icehouse
  * stable/juno
  root@compute1:/opt/openstack/src/nova# git rev-list stable/juno | head -n 1
  54330ce33ee31bbd84162f0af3a6c74003d57329

  Since upgrading from icehouse, our iscsi backed instances are no
  longer able to live migrate, throwing exceptions like:

  Traceback (most recent call last):
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 134, in _dispatch_and_reply
  incoming.message))
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 177, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 123, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 88, in wrapped
  payload)
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 71, in wrapped
  return f(self, context, *args, **kw)
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 326, in decorated_function
  kwargs['instance'], e, sys.exc_info())
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 314, in decorated_function
  return function(self, context, *args, **kwargs)
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 4882, in check_can_live_migrate_source
  dest_check_data)
File 
"/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 5040, in check_can_live_migrate_source
  raise exception.InvalidSharedStorage(reason=reason, path=source)
  InvalidSharedStorage: compute2 is not on shared storage: Live migration can 
not be used without shared storage.

  Looking back through the code, given dest_check_data like this:

  {u'disk_over_commit': False, u'disk_available_mb': None,
  u'image_type': u'default', u'filename': u'tmpyrUUg1',
  u'block_migration': False, 'is_volume_backed': True}

  In Icehouse the code to validate the request skipped this[0]:
   elif not shared and (not is_volume_backed or has_local_disks):

  In Juno, it matches this[1]:

   if (dest_check_data.get('is_volume_backed') and
           not bool(jsonutils.loads(
               self.get_instance_disk_info(instance['name'])))):

  In Juno at least, get_instance_disk_info returns something like this:

  [{u'disk_size': 10737418240, u'type': u'raw', u'virt_disk_size':
  10737418240, u'path': u'/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-
  iqn.2010-10.org.openstack:volume-10f2302c-26b6-44e0-a3ea-
  7033d1091470-lun-1', u'backing_file': u'',
  u'over_committed_disk_size': 0}]

  I wonder if that previously returned an empty value in Icehouse. I'm
  unable to test right now, but if it returned the same thing then I'm
  not sure how it ever worked before.
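
  The difference can be seen by evaluating both quoted conditions against
  the data in this report (a self-contained sketch with simplified names,
  not the driver code):

    import json

    is_volume_backed = True   # from dest_check_data above
    has_local_disks = False   # iSCSI volume only, no local disks
    shared = False            # no shared instance storage

    # Icehouse: volume-backed instances without local disks skipped the
    # shared-storage check entirely (the elif never matched).
    icehouse_rejects = not shared and (not is_volume_backed or has_local_disks)

    # Juno: the volume-backed branch is only taken when the disk info is
    # empty, but get_instance_disk_info() now reports the iSCSI device.
    disk_info = json.dumps([{'type': 'raw', 'disk_size': 10737418240}])
    juno_volume_branch = is_volume_backed and not bool(json.loads(disk_info))

    print(icehouse_rejects)    # False -> migration allowed in Icehouse
    print(juno_volume_branch)  # False -> falls through to the shared-storage
                               # check and raises InvalidSharedStorage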

  This is a lab environment; the volume storage is an LVM+iSCSI cinder
  service. nova.conf and cinder.conf are here[2].

  [0]: 
https://github.com/openstack/nova/blob/stable/icehouse/nova/virt/libvirt/driver.py#L4299
  [1]: 
https://github.com/openstack/nova/blob/stable/juno/nova/virt/libvirt/driver.py#L5073
  [2]: https://gist.github.com/DazWorrall/b1b1e906a6dc2338f6c1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392773/+subscriptions


[Yahoo-eng-team] [Bug 1435588] Re: request-ids aren't being logged for any cinder.api.openstack.wsgi [-]

2015-04-09 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1435588

Title:
   request-ids aren't being logged for any cinder.api.openstack.wsgi [-]

Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  New

Bug description:
   request-ids aren't being logged for any cinder.api.openstack.wsgi [-]

  Although request ids are logged for things like:

  2015-03-19 02:18:09.900 5394 INFO cinder.api.v2.volumes [req-31f27442
  -7b3e-4bcb-84f7-2d84ff65f3f1 9039ff7c560e4b6591f34f629fe978f4
  0c28fcc6ceac4288a51bb05721563d2c - - -] Create volume of 1 GB

  They are missing for most of the cinder logs in cinder-api:

  2015-03-19 02:18:09.899 5394 INFO cinder.api.openstack.wsgi [-] POST
  http://127.0.0.1:8776/v2/0c28fcc6ceac4288a51bb05721563d2c/volumes


  http://logs.openstack.org/16/165616/2/check/check-tempest-dsvm-
  postgres-
  full/5cd3197/logs/screen-c-api.txt.gz?#_2015-03-19_02_28_04_252

  
  This makes debugging failures significantly harder, since you can't
  match the req-id from a failed command to the relevant cinder logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1435588/+subscriptions


[Yahoo-eng-team] [Bug 1378450] Re: [OSSA 2014-039] Maliciously crafted dns_nameservers will crash neutron (CVE-2014-7821)

2015-04-09 Thread Adam Gandelman
** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/juno
   Status: Fix Released => Fix Committed

** Changed in: neutron/juno
Milestone: 2014.2.1 => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378450

Title:
  [OSSA 2014-039] Maliciously crafted dns_nameservers will crash neutron
  (CVE-2014-7821)

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed
Status in neutron juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  The following request body will crash neutron nodes.

  {"subnet": {"network_id": "2aeb163a-a415-4568-bb9e-9c0ac93d54e4", 
"ip_version": 4, 
  "cidr": "192.168.1.3/16", 
  "dns_nameservers": 
[""]}}

  Even strace stops logging.
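
  For illustration, a sketch of the input validation that would reject
  such a body at the API layer (netaddr is already a neutron dependency;
  the function name is invented):

    import netaddr


    def validate_nameservers(nameservers):
        for ns in nameservers:
            try:
                netaddr.IPAddress(ns)
            except (netaddr.AddrFormatError, ValueError, TypeError):
                raise ValueError("'%s' is not a valid nameserver" % ns)


    validate_nameservers([""])  # raises instead of hanging the node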

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378450/+subscriptions


[Yahoo-eng-team] [Bug 1413457] Re: Big Switch: exceptions on rest_get_switch in logs

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413457

Title:
  Big Switch: exceptions on rest_get_switch in logs

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  The current code that checks whether the Big Switch ML2 driver can bind
  a port as IVS does not suppress the 404 error in the server manager; it
  catches it at a higher level instead. While this logic is technically
  correct, the server manager treats the 404 as a failure and dumps error
  messages into the logs every time it happens, so the logs become
  heavily polluted during normal operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413457/+subscriptions


[Yahoo-eng-team] [Bug 1442024] [NEW] AvailabilityZoneFilter sometimes does not filter when doing live migration

2015-04-09 Thread gustavo panizzo
Public bug reported:

last night our ops team live-migrated (nova live-migration --block-
migrate $vm) a group of VMs to do hw maintenance.

the VMs ended up in a different AZ, making them unusable (we have different
upstream network connectivity in each AZ)
it never happened before, i tested


of course, i have set up the AZ filter


scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter

i'm using icehouse 2014.1.2-0ubuntu1.1~cloud0

i will clean and upload logs right away

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442024

Title:
  AvailabilityZoneFilter sometimes does not filter when doing live
  migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  last night our ops team live-migrated (nova live-migration --block-
  migrate $vm) a group of VMs to do hw maintenance.

  the VMs ended up in a different AZ, making them unusable (we have
  different upstream network connectivity in each AZ)
  it never happened before, i tested

  
  of course, i have set up the AZ filter

  
  scheduler_available_filters=nova.scheduler.filters.all_filters
  
scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter

  i'm using icehouse 2014.1.2-0ubuntu1.1~cloud0

  i will clean and upload logs right away

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442024/+subscriptions


[Yahoo-eng-team] [Bug 1442040] [NEW] aggregate_hosts does not use deleted in search indexes

2015-04-09 Thread Attila Fazekas
Public bug reported:

Now the table is declared in this way:

show create table aggregate_hosts;

CREATE TABLE `aggregate_hosts` (
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  `deleted_at` datetime DEFAULT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `host` varchar(255) DEFAULT NULL,
  `aggregate_id` int(11) NOT NULL,
  `deleted` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_aggregate_hosts0host0aggregate_id0deleted` 
(`host`,`aggregate_id`,`deleted`),
  KEY `aggregate_id` (`aggregate_id`),
  CONSTRAINT `aggregate_hosts_ibfk_1` FOREIGN KEY (`aggregate_id`) REFERENCES 
`aggregates` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8


The aggregate_hosts table in this form allows multiple deleted records for
the same (`host`, `aggregate_id`).
Is that really needed?

- yes
Add an INDEX/KEY on (`deleted`, `host`), OR change the UNIQUE KEY to start
with `deleted`: (`deleted`, `host`, `aggregate_id`). Also add an INDEX/KEY
on (`deleted`, `aggregate_id`), or extend the aggregate_id index. (See the
sketch after this list.)

- no, it is enough to preserve only one record
Change the UNIQUE KEY to (`host`, `aggregate_id`); consider using this as
the primary key instead of `id`. Add an INDEX/KEY on
(`deleted`, `aggregate_id`), or extend the aggregate_id index. Add an
INDEX/KEY on (`deleted`, `host`).

- not at all
Change the UNIQUE KEY to (`host`, `aggregate_id`); consider using this as
the primary key instead of `id`. Remove the `updated_at`, `deleted_at`,
and `deleted` fields.

Note: the `host` field should reference another table.
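
A minimal sketch of the first option as a migration (alembic-style for
brevity; nova's migrations of this era actually used sqlalchemy-migrate,
so treat this as illustrative):

  from alembic import op


  def upgrade():
      # Let lookups that filter on `deleted` use an index instead of a scan.
      op.create_index('aggregate_hosts_deleted_host_idx',
                      'aggregate_hosts', ['deleted', 'host'])
      op.create_index('aggregate_hosts_deleted_aggregate_id_idx',
                      'aggregate_hosts', ['deleted', 'aggregate_id'])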

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442040

Title:
  aggregate_hosts does not use deleted in search indexes

Status in OpenStack Compute (Nova):
  New

Bug description:
  Now the table is declared in this way:

  show create table aggregate_hosts;

  CREATE TABLE `aggregate_hosts` (
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`deleted_at` datetime DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
`host` varchar(255) DEFAULT NULL,
`aggregate_id` int(11) NOT NULL,
`deleted` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_aggregate_hosts0host0aggregate_id0deleted` 
(`host`,`aggregate_id`,`deleted`),
KEY `aggregate_id` (`aggregate_id`),
CONSTRAINT `aggregate_hosts_ibfk_1` FOREIGN KEY (`aggregate_id`) REFERENCES 
`aggregates` (`id`)
  ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

  
  The aggregate_hosts table in this form allows multiple deleted records
  for the same (`host`, `aggregate_id`).
  Is that really needed?

  - yes
  Add an INDEX/KEY on (`deleted`, `host`), OR change the UNIQUE KEY to
  start with `deleted`: (`deleted`, `host`, `aggregate_id`). Also add an
  INDEX/KEY on (`deleted`, `aggregate_id`), or extend the aggregate_id
  index.

  - no, it is enough to preserve only one record
  Change the UNIQUE KEY to (`host`, `aggregate_id`); consider using this
  as the primary key instead of `id`. Add an INDEX/KEY on
  (`deleted`, `aggregate_id`), or extend the aggregate_id index. Add an
  INDEX/KEY on (`deleted`, `host`).

  - not at all
  Change the UNIQUE KEY to (`host`, `aggregate_id`); consider using this
  as the primary key instead of `id`. Remove the `updated_at`,
  `deleted_at`, and `deleted` fields.

  Note: the `host` field should reference another table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442040/+subscriptions


[Yahoo-eng-team] [Bug 1442189] [NEW] FWaaS UT cleanup in Project dashboard

2015-04-09 Thread Abishek Subramanian
Public bug reported:

With the new addition of router insertion support in the FWaaS panel of
the project dashboard, a couple of new test cases need to be added.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1442189

Title:
  FWaaS UT cleanup in Project dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  With the new addition of router insertion support in the FWaaS panel of
  the project dashboard, a couple of new test cases need to be added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1442189/+subscriptions


[Yahoo-eng-team] [Bug 1323492] Re: Name "_" not defined

2015-04-09 Thread Kamil Rykowski
I'm invalidating this bug report as it has been in the Incomplete state
for quite some time. Feel free to reopen the bug by providing the
requested information and setting the status back to ''New'', and we will
take a closer look at it again!

** Changed in: glance
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1323492

Title:
  Name "_" not defined

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  On running the OpenStack dashboard, when it can't find glance installed
  on my machine, it is supposed to raise an exception, and when raising
  this exception it throws the following error:

  NameError at /
  name "_" is not defined in glance/common/exception.py

  The error is at line 38 of that file.

  I think the function "_" is not imported from gettextutils
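
  If that diagnosis is right, the fix is a one-line import at the top of
  exception.py; at the time the helper lived in glance's oslo-incubator
  copy (the exact module path below is an assumption):

    from glance.openstack.common.gettextutils import _  # noqa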

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1323492/+subscriptions


[Yahoo-eng-team] [Bug 1438920] Re: specification of a nova network to instance during icehouse->juno partial upgrade fails

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438920

Title:
  specification of a nova network to instance during icehouse->juno
  partial upgrade fails

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  Tempest recently made some changes around specifying a fixed network
  for instances during spawn. When this happens during an icehouse->juno
  partial upgrade, we fail to spawn instances and they are stuck in
  scheduling.

  Conductor logs show:

  oslo.messaging.rpc.dispatcher [req-047d24c2-6477-4838-afde-
  cc3c7371b19e VolumesV2ActionsTest-21337662
  VolumesV2ActionsTest-1578486140] Exception during message handling:
  need more than 2 values to unpack

  It looks like there is code in Nova to handle this case during an
  icehouse partial upgrade, but it only takes Neutron into account:
  https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/rpcapi.py#n982
  This needs some special casing to handle NetworkRequestList objects for
  nova-network as well.
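
  A rough sketch of that special casing (the version number and attribute
  names here are illustrative only, not the actual nova code):

    # Downgrade the object to the legacy nova-network tuple format when
    # the RPC pin says the computes are still on Icehouse.
    if not self.client.can_send_version('3.23'):
        if isinstance(requested_networks, objects.NetworkRequestList):
            requested_networks = [(request.network_id, request.address)
                                  for request in requested_networks.objects]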

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1438920/+subscriptions


[Yahoo-eng-team] [Bug 1421037] Re: [metering agent] Failed to get any traffic data if the first chain is missing

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421037

Title:
  [metering agent] Failed to get any traffic data if the first chain is
  missing

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Based on the current implementation, if for any reason the chain of a
  metering label is missing, the whole
  iptables_driver.get_traffic_counters() function breaks; see
  https://github.com/openstack/neutron/blob/master/neutron/services/metering/drivers/iptables/iptables_driver.py#L275

  
  2015-02-03 22:47:25.486 6384 ERROR 
neutron.services.metering.agents.metering_agent 
[req-47dbfc00-1812-423a-9f68-ce26dc54243a None] Driver 
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver:get_traffic_counters
 runtime error
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent Traceback (most recent call 
last):
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 177, in _invoke_driver
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent return 
getattr(self.metering_driver, func_name)(context, meterings)
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 34, in wrapper
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent return method(*args, 
**kwargs)
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/drivers/iptables/iptables_driver.py",
 line 275, in get_traffic_counters
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent chain, wrap=False, 
zero=True)
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 627, in get_traffic_counters
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent 
root_helper=self.root_helper))
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 82, in 
execute
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent raise RuntimeError(m)
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent RuntimeError:
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-a9123cfe-9277-49ea-a2c5-1c9581e277d2', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-c760740b-33a', '-n', '-v', '-x', '-Z']
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent Exit code: 1
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent Stdout: ''
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent Stderr: 'iptables: No 
chain/target/match by that name.\n'
  2015-02-03 22:47:25.486 6384 TRACE 
neutron.services.metering.agents.metering_agent
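
  A sketch of the defensive handling this implies: skip labels whose
  iptables chain has disappeared instead of aborting the whole collection
  (names are simplified placeholders; im stands in for the router's
  IptablesManager, and RuntimeError is what agent/linux/utils.execute
  raises on a non-zero exit):

    import logging

    LOG = logging.getLogger(__name__)

    for label_id in metering_label_ids:  # however the driver enumerates them
        chain = 'neutron-meter-l-' + label_id  # truncated in the real driver
        try:
            counters = im.get_traffic_counters(chain, wrap=False, zero=True)
        except RuntimeError:
            LOG.warning('Metering chain %s is missing, skipping it', chain)
            continue
        traffic_counters[label_id] = counters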

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421037/+subscriptions


[Yahoo-eng-team] [Bug 1442146] [NEW] Horizon Ajax cleanup

2015-04-09 Thread Sam Betts
Public bug reported:

Clean up of horizon Ajax code to allow for better management of many
tasks that have to wait for each other to finish.

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1442146

Title:
  Horizon Ajax cleanup

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Clean up of horizon Ajax code to allow for better management of many
  tasks that have to wait for each other to finish.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1442146/+subscriptions


[Yahoo-eng-team] [Bug 1442288] [NEW] Setting 'auto_fade_alerts' is working only for simple request, not ajax

2015-04-09 Thread Timur Sufiev
Public bug reported:

There is 'auto_fade_alerts' in HORIZON_CONFIG, but it works only for
messages returned on simple request-response cycle and not for messages
returned on ajax requests.

It would make sense to allow configuration of request types for which it
does work.
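
For reference, the setting in question as it is shaped in
local_settings.py (values illustrative):

  HORIZON_CONFIG = {
      'auto_fade_alerts': {
          'delay': 3000,          # ms to wait before the fade starts
          'fade_duration': 1500,  # ms the fade itself takes
          'types': ['alert-success', 'alert-info'],
      },
  }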

** Affects: horizon
 Importance: Low
 Assignee: Timur Sufiev (tsufiev-x)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1442288

Title:
  Setting 'auto_fade_alerts' is working only for simple request, not
  ajax

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is 'auto_fade_alerts' in HORIZON_CONFIG, but it works only for
  messages returned on simple request-response cycle and not for
  messages returned on ajax requests.

  It would make sense to allow configuration of request types for which
  it does work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1442288/+subscriptions


[Yahoo-eng-team] [Bug 1407887] Re: Linux bridge agent should also handle empty before/after notifications properly

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1407887

Title:
  Linux bridge agent should also  handle empty before/after
  notifications properly

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  I was looking at bug https://bugs.launchpad.net/neutron/+bug/1367881,
  which was fixed by handling empty before/after notifications in the
  l2pop code. The same problem exists in the "_fdb_chg_ip" method of
  linuxbridge_neutron_agent:

  after = state.get('after')
  for mac, ip in after:
      self.agent.br_mgr.add_fdb_ip_entry(mac, ip, interface)

  before = state.get('before')
  for mac, ip in before:
      self.agent.br_mgr.remove_fdb_ip_entry(mac, ip, interface)

  I think we should change it as well; otherwise it may cause similar
  problems.
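
  A sketch of the same guard the l2pop fix used, applied here (assuming
  state may lack either key):

    for mac, ip in state.get('after') or []:
        self.agent.br_mgr.add_fdb_ip_entry(mac, ip, interface)

    for mac, ip in state.get('before') or []:
        self.agent.br_mgr.remove_fdb_ip_entry(mac, ip, interface)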

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1407887/+subscriptions


[Yahoo-eng-team] [Bug 1442006] [NEW] failed to reach SHELVED_OFFLOADED status within the required time

2015-04-09 Thread Abhishek Kekane
Public bug reported:

check-grenade-dsvm-partial-ncpu is failing for
'tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance'
with the following error:

Captured traceback:
2015-04-08 16:08:42.672 | ~~~
2015-04-08 16:08:42.672 | Traceback (most recent call last):
2015-04-08 16:08:42.673 |   File "tempest/test.py", line 129, in wrapper
2015-04-08 16:08:42.673 | return f(self, *func_args, **func_kwargs)
2015-04-08 16:08:42.673 |   File 
"tempest/scenario/test_shelve_instance.py", line 94, in test_shelve_instance
2015-04-08 16:08:42.674 | self._shelve_then_unshelve_server(server)
2015-04-08 16:08:42.674 |   File 
"tempest/scenario/test_shelve_instance.py", line 54, in 
_shelve_then_unshelve_server
2015-04-08 16:08:42.674 | server['id'], 'SHELVED_OFFLOADED', 
extra_timeout=offload_time)
2015-04-08 16:08:42.675 |   File 
"tempest/services/compute/json/servers_client.py", line 183, in 
wait_for_server_status
2015-04-08 16:08:42.675 | ready_wait=ready_wait)
2015-04-08 16:08:42.675 |   File "tempest/common/waiters.py", line 94, in 
wait_for_server_status
2015-04-08 16:08:42.676 | raise exceptions.TimeoutException(message)
2015-04-08 16:08:42.676 | tempest.exceptions.TimeoutException: Request 
timed out
2015-04-08 16:08:42.676 | Details: 
(TestShelveInstance:test_shelve_instance) Server 
5623198f-fc72-4888-8fde-084a5222b147 failed to reach SHELVED_OFFLOADED status 
and task state "None" within the required time (196 s). Current status: ACTIVE. 
Current task state: None.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442006

Title:
  failed to reach SHELVED_OFFLOADED status within the required time

Status in OpenStack Compute (Nova):
  New

Bug description:
  check-grenade-dsvm-partial-ncpu is failing for
  'tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance'
  with the following error:

  Captured traceback:
  2015-04-08 16:08:42.672 | ~~~
  2015-04-08 16:08:42.672 | Traceback (most recent call last):
  2015-04-08 16:08:42.673 |   File "tempest/test.py", line 129, in wrapper
  2015-04-08 16:08:42.673 | return f(self, *func_args, **func_kwargs)
  2015-04-08 16:08:42.673 |   File 
"tempest/scenario/test_shelve_instance.py", line 94, in test_shelve_instance
  2015-04-08 16:08:42.674 | self._shelve_then_unshelve_server(server)
  2015-04-08 16:08:42.674 |   File 
"tempest/scenario/test_shelve_instance.py", line 54, in 
_shelve_then_unshelve_server
  2015-04-08 16:08:42.674 | server['id'], 'SHELVED_OFFLOADED', 
extra_timeout=offload_time)
  2015-04-08 16:08:42.675 |   File 
"tempest/services/compute/json/servers_client.py", line 183, in 
wait_for_server_status
  2015-04-08 16:08:42.675 | ready_wait=ready_wait)
  2015-04-08 16:08:42.675 |   File "tempest/common/waiters.py", line 94, in 
wait_for_server_status
  2015-04-08 16:08:42.676 | raise exceptions.TimeoutException(message)
  2015-04-08 16:08:42.676 | tempest.exceptions.TimeoutException: Request 
timed out
  2015-04-08 16:08:42.676 | Details: 
(TestShelveInstance:test_shelve_instance) Server 
5623198f-fc72-4888-8fde-084a5222b147 failed to reach SHELVED_OFFLOADED status 
and task state "None" within the required time (196 s). Current status: ACTIVE. 
Current task state: None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442006/+subscriptions


[Yahoo-eng-team] [Bug 1412542] Re: L3 agent restart does not SIGHUP running keepalived processes

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412542

Title:
  L3 agent restart does not SIGHUP running keepalived processes

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  Per
  
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/keepalived.py#L405:

  When the L3 agent starts, it invokes the keepalived_manager spawn
  method, which spawns the underlying keepalived process unless one is
  already running. This issue only manifests on L3 agent restarts: an
  already-running agent that reconfigures keepalived due to an RPC update
  call does successfully send a SIGHUP signal to the process.

  The effect is that restarting a L3 agent does not SIGHUP any running
  keepalived processes. So, for example, if the L3 agent crashes and is
  started again a minute or two later (This is dependent on timers
  configured for external tools such as Pacemaker), the L3 agent resyncs
  with the controller but doesn't SIGHUP any existing keepalived
  processes. This means that any updates that happened during the L3
  agent downtime will be picked up during that initial resync, but the
  agent won't actually reconfigure keepalived.

  It is also an issue during upgrades for reasons similar to what's
  explained above, as it's actually an identical flow. Fixing this bug
  is a precondition to a couple of other fixes if we want backports to
  actually fix their respective issues on Juno.
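
  A sketch of the spawn-path behaviour this asks for (names illustrative,
  not the actual keepalived manager API):

    import os
    import signal


    def spawn_or_reload(process):
        if process.active:
            # keepalived re-reads keepalived.conf on SIGHUP, so a restarted
            # agent picks up whatever changed while it was down.
            os.kill(process.pid, signal.SIGHUP)
        else:
            process.start()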

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1412542/+subscriptions


[Yahoo-eng-team] [Bug 1442123] Re: iPXE: neutron chainloading undionly.kpxe is not working

2015-04-09 Thread Lucas Alvares Gomes
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lucas Alvares Gomes (lucasagomes)

** Changed in: ironic
   Status: New => Won't Fix

** Changed in: ironic
   Status: Won't Fix => Invalid

** No longer affects: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442123

Title:
  iPXE: neutron chainloading undionly.kpxe is not working

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When sending the DHCP options to neutron[1] Ironic will pass something
  like:

   dhcp_opts.append({'opt_name': 'tag:!ipxe,bootfile-name', 'opt_value':
  CONF.pxe.pxe_bootfile_name})

  The problem is that the "ipxe" tag is never created anywhere in
  Neutron, so depending on the order in which the configuration options
  are written to the dnsmasq configuration file, the machine may get
  stuck when booting.

  The idea of the "ipxe" tag is to indicate that the DHCP request didn't
  come from iPXE/gPXE, so the flow would be like:

  1- Client does a DHCP request, if it does not come from iPXE, dnsmasq will 
ACK telling it to get the undionly.kpxe binary via TFTP
  2- Client gets the undionly.kpxe and chainload it
  3- Client does a DHCP request that nows comes from iPXE, dnsmasq will ACK 
telling it to get the boot.ipxe script from via HTTP.

  ...
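
  For reference, the usual dnsmasq idiom that creates the missing tag is
  to match on DHCP option 175, which only iPXE sends; combined with the
  two boot options, the intended configuration is roughly (hand-written
  equivalent, values taken from this report):

    dhcp-match=set:ipxe,175
    dhcp-boot=tag:!ipxe,undionly.kpxe,,192.168.122.156
    dhcp-boot=tag:ipxe,http://192.168.122.156:8088/boot.ipxe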

  How to reproduce:

  1- In a devstack environment, enable Ironic with iPXE by setting
  "IRONIC_IPXE_ENABLED=True" in local.conf.
  2- Do a nova boot to deploy a machine
  3- Now, to make the failure reproduce reliably (it may fail or succeed
  depending on the order in which neutron writes the configuration
  options to the file, which is arbitrary), move the option with the ipxe
  tag so that it comes before the option pointing to the iPXE script.
  Something like:

  tag:tag0,option:router,10.1.0.1
  
tag:b76bb31e-44bc-4f97-bee8-d0f2cc62f05a,option:bootfile-name,http://192.168.122.156:8088/boot.ipxe
  
tag:b76bb31e-44bc-4f97-bee8-d0f2cc62f05a,tag:!ipxe,option:bootfile-name,undionly.kpxe
  tag:b76bb31e-44bc-4f97-bee8-d0f2cc62f05a,option:tftp-server,192.168.122.156
  
tag:b76bb31e-44bc-4f97-bee8-d0f2cc62f05a,option:server-ip-address,192.168.122.156
  tag:tag0,option:dns-server,10.1.0.2

  NOTE: The configuration file with the dhcp options will be
  /opt/stack/data/neutron/dhcp//opts

  4- send a HUP signal to the dnsmasq process to re-read the
  configuration files

  $ sudo kill -HUP 

  5- Reboot the machine, and the machine will get stuck when booting:

  Booting from ROM...
  iPXE (PCI 00:04.0) starting execution...ok
  iPXE initialising devices...ok

  iPXE 1.0.0+git-2013.c3d1e78-2ubuntu1.1 -- Open Source Network Boot 
Firmware
  ipxe.org
  Features: HTTP HTTPS iSCSI DNS TFTP AoE bzImage ELF MBOOT PXE PXEXT Menu

  net0: 52:54:00:72:7a:85 using 82540em on PCI00:04.0 (open)
    [Link:up, TX:0 TXE:0 RX:0 RXE:0]
  Configuring (net0 52:54:00:72:7a:85).. ok
  net0: 10.1.0.4/255.255.255.0 gw 10.1.0.1
  Next server: 192.168.122.156

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442123/+subscriptions


[Yahoo-eng-team] [Bug 1433417] Re: linuxbridge unit test regression

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433417

Title:
  linuxbridge unit test regression

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  change-id I07d1d297f07857d216649cccf717896574aac301 changed
  IPWrapper.get_devices to use /sys instead of executing the ip command.
  Unfortunately it broke the linuxbridge unit tests, which seem to assume
  in some places that mocking utils.execute is enough.
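
  A sketch of the kind of test adjustment this implies: patch get_devices
  directly rather than relying on utils.execute being mocked (test
  scaffolding omitted):

    import mock

    with mock.patch('neutron.agent.linux.ip_lib.IPWrapper.get_devices',
                    return_value=[]):
        pass  # exercise the linuxbridge agent code that walks devices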

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433417/+subscriptions


[Yahoo-eng-team] [Bug 1434671] Re: MTU advertisement feature cleanup

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434671

Title:
  MTU advertisement feature cleanup

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  This bug report is to track the cleanup required for spec [1] and
  discussion going on in [2]

  [1] 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
  [2] http://lists.openstack.org/pipermail/openstack-dev/2015-March/059406.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1434671/+subscriptions


[Yahoo-eng-team] [Bug 1434103] Re: SQL schema downgrades are no longer supported

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434103

Title:
  SQL schema downgrades are no longer supported

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Magnum - Containers for OpenStack:
  New
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Data Processing (Sahara):
  Fix Released

Bug description:
  Approved cross-project spec: https://review.openstack.org/152337

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1434103/+subscriptions


[Yahoo-eng-team] [Bug 1433552] Re: Refactor *aaS and L3 agent interaction

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433552

Title:
  Refactor *aaS and L3 agent interaction

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  This bug is to do some cleanup refactoring of the *aaS and L3 agent
  interactions. It has several goals:

  1) Switch from the L3EventObservers to the CallbacksManager mechanism for all 
*aaS agents.
  2) Remove/reduce the dependency of *aaS agents on the L3 agent
  3) Eliminate the AdvancedService base class
  4) Eliminate the (now) unused advanced_service.py and event_observer.py
  modules.

  This will take us from having two separate mechanisms for managing
  callbacks from the L3 agent to the *aaS agents and their device drivers
  down to one (#1, #4).

  It also simplifies the code. Instead of handling callbacks, forwarding
  calls to the L3 agent to get router info (removed in a previous
  refactoring), maintaining another copy of the L3 agent and config
  settings (#2), and loading device drivers, classes like VPNService can
  focus just on driver loading. We will no longer need a class hierarchy
  for this either (#3).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433552/+subscriptions


[Yahoo-eng-team] [Bug 1414937] Re: Default Route not added for IPv6 subnets in HA Router

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414937

Title:
  Default Route not added for IPv6 subnets in HA Router

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  In an HA setup, keepalived configures the default gateway (in the
  master HA router) by parsing the "virtual_routes" section of the
  keepalived.conf file. In the current Neutron code, the virtual_routes
  section is constructed with only IPv4 subnets in view; it has to be
  suitably enhanced to support IPv6 subnets.
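
  An illustrative keepalived.conf fragment (addresses invented): the
  template currently only emits the IPv4 default route; an IPv6 subnet
  needs the equivalent ::/0 entry via the gateway's link-local address.

    virtual_routes {
        0.0.0.0/0 via 172.24.4.1 dev qg-5c747c54-56
        ::/0 via fe80::f816:3eff:fe4f:1db1 dev qg-5c747c54-56
    }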

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414937/+subscriptions


[Yahoo-eng-team] [Bug 1400280] Re: Metering label rule creation improvements

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1400280

Title:
  Metering label rule creation improvements

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  When a neutron user creates a metering label rule, neutron-api fetches
  all labels from the database and sends them to the queue; the metering
  agent reads them and recreates all chains in iptables.
  This is not optimal and causes high load on neutron if you use many
  labels/rules.

  I think we can send only the new rule and add just that rule to the
  iptables chain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1400280/+subscriptions


[Yahoo-eng-team] [Bug 1376169] Re: ODL MD can't reconnect to ODL after it restarts

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376169

Title:
  ODL MD can't reconnect to ODL after it restarts

Status in OpenDaylight backend controller integration with Neutron:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  New
Status in neutron juno series:
  Fix Committed

Bug description:
  ODL MD does not do any processing when it receives a 401 HTTP error
  (Unauthorized), which happens after ODL restarts.
  The only way to recover is to restart neutron.

  This creates a strong coupling between restarts of neutron and ODL.

  To reproduce it:
   - start ODL and neutron
   - create a network
   - restart ODL
   - create another network

  The last command raises a 401 HTTP error, and every subsequent
  operation fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1376169/+subscriptions


[Yahoo-eng-team] [Bug 1367999] Re: live-migration causes VM network disconnected forever

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367999

Title:
  live-migration causes VM network disconnected forever

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  OS: RHEL 6.5
  OpenStack: RDO icehouse and master
  Neutron: Linuxbridge + VxLAN + L2pop
  Testbed: 1 controller node + 2 compute nodes + 1 network node

  Reproduction procedure:

  1. Start to ping VM from qrouter namespace using fixed IP
  Start to ping VM from outside using floating IP

  2. Live-migrate the VM from compute1 to compute2

  3. VM Network disconnects after several seconds

  4. Even after Nova reports that the migration is finished,
  ping is still not working.

  Debug Info on network node:

  Command: ['sudo', 'bridge', 'fdb', 'add', 'fa:16:3e:b3:fd:27', 'dev', 
'vxlan-1', 'dst', '192.168.2.103']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: File exists\n'

  Cause:
  Before the migration, the original fdb entry is already there. After
  the migration, l2pop updates the fdb entry of the VM: it adds a new
  entry, which causes the ERROR above.

  The right operation should be 'replace', not 'add'.

  By the way, 'replace' will safely add the new entry if the old entry
  does not exist.
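
  For example, re-running the failing command above with 'replace'
  succeeds whether or not the entry already exists:

    bridge fdb replace fa:16:3e:b3:fd:27 dev vxlan-1 dst 192.168.2.103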

  I think this bug can be marked as High.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367999/+subscriptions


[Yahoo-eng-team] [Bug 1416554] Re: l3 prevent port deletion doesn't handle missing ports

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => Low

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416554

Title:
  l3 prevent port deletion doesn't handle missing ports

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  The l3 code to check to see if a port can be deleted does not handle
  the case where the port id it is passed does not refer to a port that
  still exists. This throws an exception and makes the API behavior
  inconsistent when two concurrent requests come in to delete the same
  port.[1] This is inconsistent because if the port is concurrently
  deleted after the l3 check is done but before the delete attempt is
  made, no exception will be raised.[2]



  1. This happens frequently when horizon deletes the subnet and
  immediately deletes the network afterwards. The dhcp agent will delete
  its port on the subnet cleanup and may rip the port out right before
  the delete_network call does its auto port cleanup. The auto port
  cleanup will then hit a portnotfound exception which goes uncaught.
  2. 
https://github.com/openstack/neutron/blob/6a797f354eb4ba936b80603f7cc01a2fe80446fd/neutron/plugins/ml2/plugin.py#L1113
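
  A sketch of the tolerant handling this suggests
  (prevent_l3_port_deletion and PortNotFound as named in neutron;
  surrounding plugin code omitted):

    from neutron.common import exceptions as n_exc

    try:
        l3plugin.prevent_l3_port_deletion(context, port_id)
    except n_exc.PortNotFound:
        # The port was deleted concurrently; there is nothing left to
        # protect, matching the behaviour when the race happens after
        # the check.
        pass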

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416554/+subscriptions


[Yahoo-eng-team] [Bug 1373816] Re: _get_security_groups_on_port tries to get [0] on a set type

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373816

Title:
  _get_security_groups_on_port tries to get [0] on a set type

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  _get_security_groups_on_port first checks that all security groups on
  the port belong to the tenant; if any do not fulfill this requirement,
  it tries to raise SecurityGroupNotFound but fails with:
  TypeError: 'set' object does not support indexing

  port_sg_missing = requested_groups - valid_groups
  if port_sg_missing:
      raise ext_sg.SecurityGroupNotFound(id=str(port_sg_missing[0]))

  The failure itself is one thing, but I also think that message =
  _("Security group %(id)s does not exist"), where id would be a randomly
  chosen missing id, isn't really clear in this context, and a new
  exception should be created for this case.
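
  A self-contained sketch of the immediate fix (sets don't support
  indexing, so pick an arbitrary element explicitly; the clearer
  dedicated exception is a separate change):

    requested_groups = {'sg-1', 'sg-2'}
    valid_groups = {'sg-1'}

    port_sg_missing = requested_groups - valid_groups
    if port_sg_missing:
        missing_id = next(iter(port_sg_missing))  # where [0] raised TypeError
        raise LookupError("Security group %s does not exist" % missing_id)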

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373816/+subscriptions


[Yahoo-eng-team] [Bug 1287824] Re: l3 agent makes too many individual sudo/ip netns calls

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1287824

Title:
  l3 agent makes too many individual sudo/ip netns calls

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Basically, calls to sudo, root_wrap, and ip netns exec all add
  overhead that can make these calls very expensive.  Developing an
  effective way of consolidating them into considerably fewer calls
  will be a big win.  This assumes the mechanism for consolidating
  them does not itself add a lot of overhead.
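
  One possible consolidation approach (a hedged sketch, not Neutron's actual
  implementation) is to run several commands through a single ip netns exec
  invocation:

      import subprocess

      def netns_batch(namespace, commands):
          # One sudo/netns round-trip for many commands instead of one each.
          script = " && ".join(commands)
          return subprocess.check_output(
              ["sudo", "ip", "netns", "exec", namespace, "sh", "-c", script])

      # e.g. netns_batch("qrouter-...", ["ip link set lo up", "ip addr show"])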

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1287824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403068] Re: Tests fail with python 2.7.9

2015-04-09 Thread Adam Gandelman
** Changed in: cinder/juno
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403068

Title:
  Tests fail with python 2.7.9

Status in Cinder:
  Fix Released
Status in Cinder juno series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Manila:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  Tests that require SSL fail on Python 2.7.9 due to a change in how
  Python handles SSL certificates.

  
  ==
  FAIL: cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ipv6_and_ssl
  cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ipv6_and_ssl
  --
  _StringException: Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 237, in 
test_app_using_ipv6_and_ssl
  response = open_no_proxy('https://[::1]:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 

  
  ==
  FAIL: cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ssl
  cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ssl
  --
  _StringException: Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 212, in 
test_app_using_ssl
  response = open_no_proxy('https://127.0.0.1:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 
  Traceback (most recent call last):
  _StringEx

[Yahoo-eng-team] [Bug 1441347] Re: test_ensure_dir_*_exist fails randomly

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441347

Title:
  test_ensure_dir_*_exist fails randomly

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Tests fail with:

  ft1.579: 
neutron.tests.unit.agent.linux.test_utils.TestBaseOSUtils.test_ensure_dir_exist_StringException:
 Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
  return func(*args, **keywargs)
File "neutron/tests/unit/agent/linux/test_utils.py", line 236, in 
test_ensure_dir_exist
  isdir.assert_called_once_with('/the')
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 845, in assert_called_once_with
  raise AssertionError(msg)
  AssertionError: Expected to be called once. Called 0 times.

  The logstash query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9lbnN1cmVfZGlyX25vdF9leGlzdFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NDM5NTU2MzA4fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439817] Re: IP set full error in kernel log

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439817

Title:
  IP set full error in kernel log

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  This is appearing in some logs upstream:
  http://logs.openstack.org/73/170073/1/experimental/check-tempest-dsvm-
  neutron-full-non-
  isolated/ac882e3/logs/kern_log.txt.gz#_Apr__2_13_03_06

  And it has also been reported by andreaf in IRC as having been
  observed downstream.

  Logstash is not very helpful as this manifests only with a job currently in 
the experimental queue.
  Since said job runs in non-isolated mode, the accrual of elements in the IP 
set until it reaches saturation is one thing that might need to be investigated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438329] Re: Example configuration files lack changes for Kilo

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438329

Title:
  Example configuration files lack changes for Kilo

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The example configuration files in the source repository lack the
  changes in the list of new, updated, and deprecated options in the
  Kilo configuration reference guide [1]. For example, the example
  neutron.conf file [2] lacks the [nova] and [oslo_messaging_*] sections
  and options.

  [1] 
http://docs.openstack.org/trunk/config-reference/content/neutron-conf-changes-kilo.html
  [2] https://github.com/openstack/neutron/blob/master/etc/neutron.conf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438329/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434601] Re: Incorrect usage default in migration 1955efc66455

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434601

Title:
  Incorrect usage default in migration 1955efc66455

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Migration 1955efc66455_weight_scheduler adds a column with the 'default'
  parameter. 'default' is useless in a migration; to provide a default value
  in the database, server_default should be used instead.
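
  A minimal sketch of the difference (column and table names assumed for
  illustration):

      import sqlalchemy as sa
      from alembic import op

      def upgrade():
          # server_default emits a DEFAULT clause in the database schema;
          # plain default=... only applies to new Python-side inserts.
          op.add_column('agents',
                        sa.Column('load', sa.Integer(), nullable=False,
                                  server_default='0'))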

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1434601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1422476] Re: floating ip scheduled to wrong router

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1422476

Title:
  floating ip scheduled to wrong router

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  I have a tenant network, two external networks, two routers (each one
  has its gateway set to one of the external networks, and one port on the
  tenant network) and floating IPs on each external network.

  In icehouse, this worked fine: the floating IP for each network was
  attached to the correct router. After upgrading to RDO Juno, I'm
  seeing both sets of floating IPs getting assigned to the same router:

  [root@cloud ~]# ip netns exec qrouter-209158a6-ee00-405f-b929-7cb386460d94 ip 
a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  44: qr-ffaaacc1-06:  mtu 1500 qdisc noqueue 
state UNKNOWN 
  link/ether fa:16:3e:5c:e1:58 brd ff:ff:ff:ff:ff:ff
  inet 192.168.127.1/24 brd 192.168.127.255 scope global qr-ffaaacc1-06
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe5c:e158/64 scope link 
 valid_lft forever preferred_lft forever
  53: qg-1a260edc-41:  mtu 1500 qdisc noqueue 
state UNKNOWN 
  link/ether fa:16:3e:81:dc:f7 brd ff:ff:ff:ff:ff:ff
  inet 192.101.107.185/25 brd 192.101.107.255 scope global qg-1a260edc-41
 valid_lft forever preferred_lft forever
  inet 192.168.122.179/32 brd 192.168.122.179 scope global qg-1a260edc-41
 valid_lft forever preferred_lft forever
  inet 192.168.122.128/32 brd 192.168.122.128 scope global qg-1a260edc-41
 valid_lft forever preferred_lft forever
  inet 192.101.107.171/32 brd 192.101.107.171 scope global qg-1a260edc-41
 valid_lft forever preferred_lft forever
  inet 192.101.107.181/32 brd 192.101.107.181 scope global qg-1a260edc-41
 valid_lft forever preferred_lft forever
  inet 192.101.107.180/32 brd 192.101.107.180 scope global qg-1a260edc-41
 valid_lft forever preferred_lft forever
  inet 192.101.107.179/32 brd 192.101.107.179 scope global qg-1a260edc-41
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe81:dcf7/64 scope link 
 valid_lft forever preferred_lft forever

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1422476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421105] Re: L2 population sometimes failed with multiple neutron-server

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421105

Title:
  L2 population sometimes failed with multiple neutron-server

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  In my environment with two neutron-server, 'mechanism_drivers' is 
openvswitch, l2 population is set.
  When I delete a VM which is the network-A  last VM in compute node-A, I found 
a KeyError in  compute node-B openvswitch-agent log, it throws by 
'del_fdb_flow':

  def del_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
      if port_info == q_const.FLOODING_ENTRY:
          lvm.tun_ofports.remove(ofport)
          if len(lvm.tun_ofports) > 0:
              ofports = _ofport_set_to_str(lvm.tun_ofports)
              br.mod_flow(table=constants.FLOOD_TO_TUN,
                          dl_vlan=lvm.vlan,
                          actions="strip_vlan,set_tunnel:%s,output:%s" %
                          (lvm.segmentation_id, ofports))

  the reason is that the openvswitch agent receives the 'fdb_remove' RPC
  request twice. Why does it receive it twice? I think the reason is:

  there are two neutron-servers (neutron-serverA, neutron-serverB) and one 
compute node-A
  1. nova deletes a VM on compute node-A. It first deletes the TAP device; 
the OVS agent then notices the port is deleted and sends an 
'update_device_down' RPC request to neutron-serverA. When neutron-serverA 
receives this request, l2 population sends the first 'fdb_remove'.
  2. after nova deletes the TAP device, it sends a 'delete_port' REST API 
request to neutron-serverB, and l2 population sends a second 'fdb_remove' RPC 
request.
  When the OVS agent receives the second 'fdb_remove', del_fdb_flow raises a 
KeyError from 'lvm.tun_ofports.remove(ofport)', because the ofport was 
already removed by the first request.
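
  A hedged sketch of an idempotent removal (one possible fix, not necessarily
  the one that merged):

      def del_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
          if port_info == q_const.FLOODING_ENTRY:
              # discard() is a no-op when the ofport was already removed
              # by an earlier duplicate fdb_remove, unlike remove().
              lvm.tun_ofports.discard(ofport)
              # ... rest of the flow update unchanged ...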

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417633] Re: OVS Agent should fail when enable_distributed_routing = True and l2_population = False

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417633

Title:
  OVS Agent should fail when enable_distributed_routing = True and
  l2_population = False

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  l2_population is required for DVR, both as a mechanism driver and as an 
option in the OVS agent's configuration.
  The agent should fail to start when enable_distributed_routing = True and 
l2_population = False, otherwise the router won't behave as expected.
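
  A minimal sketch of such a startup check (option group and names as used
  by the OVS agent, assumed here):

      from oslo_config import cfg

      def validate_dvr_config():
          agent_conf = cfg.CONF.AGENT
          if (agent_conf.enable_distributed_routing
                  and not agent_conf.l2_population):
              raise SystemExit("enable_distributed_routing = True requires "
                               "l2_population = True; refusing to start")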

  Version
  ==
  RHEL 7.0
  openstack-neutron-2014.2.1-6.el7ost.noarch
  openstack-neutron-ml2-2014.2.1-6.el7ost.noarch
  python-neutron-2014.2.1-6.el7ost.noarch
  openstack-neutron-openvswitch-2014.2.1-6.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385246] Re: Catch specific exceptions for _get_instance_nw_info

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Low

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385246

Title:
  Catch specific exceptions for _get_instance_nw_info

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  Occasionally see the following logs:

  2014-10-19 17:29:54.170 27466 ERROR nova.compute.manager [-] [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] An error occurred while refreshing the 
network cache.
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] Traceback (most recent call last):
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/opt/stack/nova/nova/compute/manager.py", line 5327, in 
_heal_instance_info_cache
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] self._get_instance_nw_info(context, 
instance, use_slave=True)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1233, in _get_instance_nw_info
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] instance)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/opt/stack/nova/nova/network/api.py", line 48, in wrapped
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] return func(self, context, *args, 
**kwargs)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/opt/stack/nova/nova/network/api.py", line 374, in get_instance_nw_info
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] result = 
self._get_instance_nw_info(context, instance)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/opt/stack/nova/nova/network/api.py", line 390, in _get_instance_nw_info
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] nw_info = 
self.network_rpcapi.get_instance_nw_info(context, **args)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/opt/stack/nova/nova/network/rpcapi.py", line 242, in get_instance_nw_info
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] host=host, project_id=project_id)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 
152, in call
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] retry=self.retry)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, 
in _send
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] timeout=timeout, retry=retry)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", 
line 408, in send
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] retry=retry)
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", 
line 399, in _send
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] raise result
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] InstanceNotFound_Remote: Instance 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1 could not be found.
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] Traceback (most recent call last):
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] 
  2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-9

[Yahoo-eng-team] [Bug 1423213] Re: ipsec site connection should be set to ERROR state if the peer address is fqdn and cannot be resolved

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1423213

Title:
  ipsec site connection should be set to ERROR state if the peer address
  is fqdn and cannot be resolved

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  When creating an ipsec site connection, if the peer address provided is
  an fqdn and cannot be resolved, the status of the ipsec site connection
  remains PENDING_CREATE. It should be set to the ERROR state.

  This bug is a follow-up to bug
  https://bugs.launchpad.net/neutron/+bug/1405413.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399127] Re: Hyper-V: copy_vm_console_logs does not behave as expected

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Low

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399127

Title:
  Hyper-V: copy_vm_console_logs does not behave as expected

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  The method nova.virt.hyperv.vmops.VMOps.copy_vm_console_logs does not
  behave as expected. For example,  it should copy the local files
  'local.file', 'local.file.1' to the remote locations 'remote.file',
  'remote.file.1' respectively. Instead it copies 'local.file' to
  'local.file.1' and 'remote.file' to 'remote.file.1'.
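
  A hedged sketch of the intended pairing (helper names assumed):

      def copy_vm_console_logs(self, local_paths, remote_paths):
          # Pair each local log with its remote counterpart rather than
          # copying local -> local.1 and remote -> remote.1.
          for local_path, remote_path in zip(local_paths, remote_paths):
              if self._pathutils.exists(local_path):  # helper assumed
                  self._pathutils.copy(local_path, remote_path)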

  This issue was discovered while creating unit tests:
  https://review.openstack.org/#/c/138934/

  Trace:

  2014-12-04 08:25:51.623 | Traceback (most recent call last):
  2014-12-04 08:25:51.624 |   File 
"nova/tests/unit/virt/hyperv/test_vmops.py", line 868, in 
test_copy_vm_console_logs
  2014-12-04 08:25:51.624 | mock.sentinel.FAKE_PATH, 
mock.sentinel.FAKE_REMOTE_PATH)
  2014-12-04 08:25:51.624 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 846, in assert_called_once_with
  2014-12-04 08:25:51.624 | return self.assert_called_with(*args, 
**kwargs)
  2014-12-04 08:25:51.625 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 835, in assert_called_with
  2014-12-04 08:25:51.625 | raise AssertionError(msg)
  2014-12-04 08:25:51.625 | AssertionError: Expected call: 
copy(sentinel.FAKE_PATH, sentinel.FAKE_REMOTE_PATH)
  2014-12-04 08:25:51.626 | Actual call: copy(sentinel.FAKE_PATH, 
sentinel.FAKE_PATH_ARCHIVED)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442322] [NEW] [Launch Instance Fix] Remove unnecessary class in config step

2015-04-09 Thread Kelly Domico
Public bug reported:

Remove the "title" class in the configuration step

** Affects: horizon
 Importance: Undecided
 Assignee: Chris Johnson (wchrisjohnson)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1442322

Title:
  [Launch Instance Fix] Remove unnecessary class in config step

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Remove the "title" class in the configuration step

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1442322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368989] Re: service_update() should not set an RPC timeout longer than service.report_interval

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Medium

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368989

Title:
  service_update() should not set an RPC timeout longer than
  service.report_interval

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  nova.servicegroup.drivers.db.DbDriver._report_state() is called every
  service.report_interval seconds from a timer in order to periodically
  report the service state.  It calls
  self.conductor_api.service_update().

  If this ends up calling
  nova.conductor.rpcapi.ConductorAPI.service_update(), it will do an RPC
  call() to nova-conductor.

  If anything happens to the RPC server (failover, switchover, etc.) by
  default the RPC code will wait 60 seconds for a response (blocking the
  timer-based calling of _report_state() in the meantime).  This is long
  enough to cause the status in the database to get old enough that
  other services consider this service to be "down".

  Arguably, since we're going to call service_update() again in
  service.report_interval seconds there's no reason to wait the full 60
  seconds.  Instead, it would make sense to set the RPC timeout for the
  service_update() call to something slightly less than
  service.report_interval seconds.
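
  A hedged sketch of that idea using oslo.messaging's per-call timeout
  (config option name assumed):

      def service_update(self, context, service):
          # Cap the RPC timeout just below the report interval so a stalled
          # call cannot block the next periodic state report.
          timeout = max(1, CONF.report_interval - 1)  # config name assumed
          cctxt = self.client.prepare(timeout=timeout)
          return cctxt.call(context, 'service_update', service=service)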

  I've also submitted a related bug report
  (https://bugs.launchpad.net/bugs/1368917) to improve RPC loss of
  connection in general, but I expect that'll take a while to deal with
  while this particular case can be handled much more easily.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376013] Re: Error about rfp/fpr veth in log when restarting l3 agent in DVR mode

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => Medium

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376013

Title:
  Error about rfp/fpr veth in log when restarting l3 agent in DVR mode

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  2014-09-30 21:14:14.636 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-2d8e2ae6-4180-4e43-953d-931189c0a5ae', 'ip', 'link', 'add', 
'rfp-2d8e2ae6-4', 'type', 'veth', 'peer', 'name', 'fpr-2d8e2ae6-4', 'netns', 
'fip-be1a07de-9d7b-4823-8d01-3091773d794b']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: File exists\n'
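
  The 'File exists' error suggests the agent recreates the rfp/fpr veth pair
  on restart without checking for an existing device. A hedged sketch of a
  guard (ip_lib usage assumed, not the actual merged fix):

      from neutron.agent.linux import ip_lib  # import path assumed

      def ensure_rfp_fpr_pair(rfp_name, fpr_name, router_ns, fip_ns):
          # Only create the veth pair if it is not already present,
          # e.g. after an l3-agent restart.
          if not ip_lib.device_exists(rfp_name, namespace=router_ns):
              ip_wrapper = ip_lib.IPWrapper(namespace=router_ns)
              ip_wrapper.add_veth(rfp_name, fpr_name, namespace2=fip_ns)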

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376013/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1411137] Re: Can't use extra-dhcp-opt to set networking parameters on stateless-dhcp network

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411137

Title:
  Can't use extra-dhcp-opt to set networking parameters on stateless-
  dhcp network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  When using SLAAC + stateless DHCPv6 to configure IPv6 addresses and
  network settings on OpenStack, if the network consists of only one IPv6
  subnet with ipv6-address-mode set to stateless DHCPv6, the dhcp agent
  will filter this subnet out in its _iter_hosts() function. At that
  point, even if you use extra-dhcp-opt to set network configuration on
  ports of this subnet, the options will not be included in the dnsmasq
  host file and cannot be obtained through the DHCPv6 protocol. Stateless
  DHCPv6 will fail in this situation!
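
  A hedged sketch of the kind of condition the report points at (helper
  names are illustrative):

      def iter_dhcp_hosts(ports):
          for port in ports:
              for alloc in port.fixed_ips:
                  # Skip a host entry only when DHCP is off for the subnet
                  # AND the port carries no extra-dhcp-opt values to serve.
                  no_dhcp = subnet_is_slaac_or_stateless(alloc.subnet_id)  # assumed
                  no_opts = not getattr(port, 'extra_dhcp_opts', None)
                  if no_dhcp and no_opts:
                      continue
                  yield port, alloc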

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419754] Re: linuxbridge+Xen: tap devices not detected by neutron-linuxbridge-agent

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1419754

Title:
  linuxbridge+Xen: tap devices not detected by neutron-linuxbridge-agent

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  When using Xen as hypervisor and the linuxbridge ml2 plugin, the 
neutron-linuxbridge-agent does not detect added/removed/changed tap devices.
  The reason seems to be that the agent only searches /sys/devices/virtual/net 
for tap devices. In the Xen case, the tap devices don't show up in this 
directory; they are available in /sys/class/net. Also, according to the kernel 
ABI doc (see https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-devices ), 
/sys/devices should not be used; /sys/class should be used instead.
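
  A minimal sketch of scanning /sys/class/net instead (illustrative, not the
  agent's exact code):

      import os

      def get_tap_devices(prefix='tap'):
          # /sys/class/net lists Xen-backed taps too, unlike
          # /sys/devices/virtual/net.
          return set(dev for dev in os.listdir('/sys/class/net')
                     if dev.startswith(prefix))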

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1419754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407886] Re: FWaaS, VPNaaS - can not control policy using policy.json

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1407886

Title:
  FWaaS, VPNaaS - can not control policy using policy.json

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Some Neutron resources cannot have policy control applied via policy.json
  when creating/updating/deleting the resource.

  The following resources cannot have policy control applied:

  * firewall_policy
  * ipsec_policy
  * ikepolicy

  This bug occurs in the following case (example):

  If the administrator tries to prohibit general users from specifying the
  "shared" attribute when creating the resource, they can't.

  How to reproduce:

  # grep create_firewall_policy /etc/neutron/policy.json
  "create_firewall_policy:shared": "rule:context_is_admin",

  # source keystonerc_demo #change to general user
  # neutron firewall-policy-create --shared foo
  Created a new firewall_policy:
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | audited        | False                                |
  | description    |                                      |
  | firewall_rules |                                      |
  | id             | ab688173-72dc-4032-aa90-5d75e2529830 |
  | name           | foo                                  |
  | shared         | True                                 |
  | tenant_id      | 0cf9279d4de346fc83ac297a289a79c6     |
  +----------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1407886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403270] Re: "import nova.hacking.checks.factory" now requires eventlet

2015-04-09 Thread Joe Gordon
released in 0.10

** Changed in: hacking
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403270

Title:
  "import nova.hacking.checks.factory" now requires eventlet

Status in OpenStack Hacking Guidelines:
  Fix Released
Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  As of https://review.openstack.org/#/c/140146/2 importing
  nova.hacking.checks now requires eventlet

  This becomes an issue if we want to run flake8 in a venv without
  eventlet.  We do this as part of the hacking integration test, it
  builds a venv (via tox) with trunk hacking and runs flake8 on other
  repositories to help check for issues.

  A few possible ways to resolve this:

  * revert https://review.openstack.org/#/c/140146/2
  * Add eventlet to the hacking integration test
  * move hacking checks outside of nova/*

  For now going with the second option.

  Stacktrace: http://logs.openstack.org/52/134052/5/check//gate-hacking-
  integration-nova/c4d107b/console.html#_2014-12-16_22_38_03_070


  >>> import nova.hacking.checks.factory
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "nova/__init__.py", line 30, in <module>
      import eventlet  # noqa
  ImportError: No module named eventlet

To manage notifications about this bug go to:
https://bugs.launchpad.net/hacking/+bug/1403270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442174] [NEW] can not print request_id in some logs when booting an instance

2015-04-09 Thread Hua Wang
Public bug reported:

nova keeps the RequestContext in threading.local, so it can print the
request_id in the logs. But when booting an instance, it spawns a new
greenthread to build the instance. The RequestContext is not kept in the
threading.local of the new greenthread, so we can not print the
request_id in the logs.
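
A hedged sketch of one way to keep the context visible in the new
greenthread (update_store() is assumed to be nova's context re-attach
helper):

    import eventlet

    def spawn_with_context(context, build_fn, *args):
        def _run():
            # Re-attach the RequestContext to this greenthread's
            # threading.local store so log records keep the request_id.
            context.update_store()
            build_fn(context, *args)
        eventlet.spawn_n(_run)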

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  nova keeps the RequestConext in threading.local, so it can print the
  request_id in the logs. But when booting an instance, it spawns a new
  greenthread to build the instance. The RequestContext is not kept in the
- threading.local of the new greenthread, so we cann't print the
+ threading.local of the new greenthread, so we can not print the
  request_id in the logs.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442174

Title:
  can not print request_id in some logs when booting an instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova keeps the RequestContext in threading.local, so it can print the
  request_id in the logs. But when booting an instance, it spawns a new
  greenthread to build the instance. The RequestContext is not kept in
  the threading.local of the new greenthread, so we can not print the
  request_id in the logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442272] [NEW] functional.agent.test_ovs_flows.ARPSpoofTestCase.test_arp_spoof_disable_port_security fails

2015-04-09 Thread Eugene Nikanorov
Public bug reported:

Logstash query:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlNraXBwaW5nIEFSUCBzcG9vZmluZyBydWxlcyBmb3IgcG9ydFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjg1OTc0OTExMTN9


 Captured pythonlogging:
2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.414 | ~~~
2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.416 | 2015-04-09 16:18:04,042 
INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] Skipping ARP 
spoofing rules for port 'test-port202660' because it has port security disabled
2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.417 | 2015-04-09 16:18:05,359 
   ERROR [neutron.agent.linux.utils] 
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.419 | Command: ['ip', 
'netns', 'exec', 'func-89a1f22f-b789-4b12-a70c-0f8dde1baf42', 'ping', '-c', 1, 
'-W', 1, '192.168.0.2']
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.420 | Exit code: 1
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.422 | Stdin: 
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.423 | Stdout: PING 
192.168.0.2 (192.168.0.2) 56(84) bytes of data.
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.425 | 
2015-04-09 16:18:32.472 | 2015-04-09 16:18:32.426 | --- 192.168.0.2 ping 
statistics ---
2015-04-09 16:18:32.472 | 2015-04-09 16:18:32.428 | 1 packets transmitted, 
0 received, 100% packet loss, time 0ms
2015-04-09 16:18:32.472 | 2015-04-09 16:18:32.429 | 
2015-04-09 16:18:32.666 | 2015-04-09 16:18:32.431 | 
2015-04-09 16:18:32.667 | 2015-04-09 16:18:32.432 | Stderr: 
2015-04-09 16:18:32.667 | 2015-04-09 16:18:32.439 | 
2015-04-09 16:18:32.668 | 2015-04-09 16:18:32.440 | 
2015-04-09 16:18:32.669 | 2015-04-09 16:18:32.442 | Captured traceback:
2015-04-09 16:18:32.779 | 2015-04-09 16:18:32.443 | ~~~
2015-04-09 16:18:32.779 | 2015-04-09 16:18:32.445 | Traceback (most recent 
call last):
2015-04-09 16:18:32.779 | 2015-04-09 16:18:32.447 |   File 
"neutron/tests/functional/agent/test_ovs_flows.py", line 79, in 
test_arp_spoof_disable_port_security
2015-04-09 16:18:32.780 | 2015-04-09 16:18:32.448 | 
pinger.assert_ping(self.dst_addr)
2015-04-09 16:18:32.780 | 2015-04-09 16:18:32.450 |   File 
"neutron/tests/functional/agent/linux/helpers.py", line 113, in assert_ping
2015-04-09 16:18:32.780 | 2015-04-09 16:18:32.451 | 
self._ping_destination(dst_ip)
2015-04-09 16:18:32.780 | 2015-04-09 16:18:32.453 |   File 
"neutron/tests/functional/agent/linux/helpers.py", line 110, in 
_ping_destination
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.454 | '-W', 
self._timeout, dest_address])
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.456 |   File 
"neutron/agent/linux/ip_lib.py", line 580, in execute
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.457 | 
extra_ok_codes=extra_ok_codes, **kwargs)
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.459 |   File 
"neutron/agent/linux/utils.py", line 137, in execute
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.461 | raise 
RuntimeError(m)
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.462 | RuntimeError: 
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.464 | Command: ['ip', 
'netns', 'exec', 'func-89a1f22f-b789-4b12-a70c-0f8dde1baf42', 'ping', '-c', 1, 
'-W', 1, '192.168.0.2']
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.465 | Exit code: 1
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.467 | Stdin: 
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.469 | Stdout: PING 
192.168.0.2 (192.168.0.2) 56(84) bytes of data.
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.470 | 
2015-04-09 16:18:32.783 | 2015-04-09 16:18:32.472 | --- 192.168.0.2 ping 
statistics ---
2015-04-09 16:18:32.783 | 2015-04-09 16:18:32.473 | 1 packets transmitted, 
0 received, 100% packet loss, time 0ms

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442272

Title:
  
functional.agent.test_ovs_flows.ARPSpoofTestCase.test_arp_spoof_disable_port_security
  fails

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Logstash query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlNraXBwaW5nIEFSUCBzcG9vZmluZyBydWxlcyBmb3IgcG9ydFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjg1OTc0OTExMTN9

  
   Captured pythonlogging:
  2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.414 | ~~~
  2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.416 | 2015-04-09 
16:18:04,042 INFO [neutron.plugins.openv

[Yahoo-eng-team] [Bug 1415523] Re: Hyper-V agent: security groups issue

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415523

Title:
  Hyper-V agent: security groups issue

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  After patch If19be8579ca734a899cdd673c919eee8165aaa0e refactored
  securitygroups_rpc, prepare_devices_filter attempts to use methods
  not implemented by the HyperV security groups driver.

  For this reason, binding ports fails with NotImplementedError if
  security groups are enabled.

  Trace: http://paste.openstack.org/show/163203/

  Until the HyperV security groups driver reaches parity, the
  _use_enhanced_rpc flag should be set to false on the
  HyperVSecurityAgent.
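
  A hedged sketch of that stop-gap (attribute name taken from the report,
  base class assumed):

      class HyperVSecurityAgent(sg_rpc.SecurityGroupAgentRpc):
          def __init__(self, context, plugin_rpc):
              super(HyperVSecurityAgent, self).__init__(context, plugin_rpc)
              # Stop-gap until the Hyper-V driver implements the enhanced
              # RPC methods; flag name taken from the report.
              self._use_enhanced_rpc = False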

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1415523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426383] Re: Listing security-groups takes long time

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426383

Title:
  Listing security-groups takes long time

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  If we have a large number of security groups (more than 1000) with
  security group rules (about 100 per group), listing them can take a
  rather long time (more than 1 minute). Analysis shows that listing can
  be made about 15% faster (with a larger number of security groups, 2-3
  times faster) by adding a lazy join to the backref on the
  SecurityGroupRule model.
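
  A hedged sketch of the kind of mapping change described (model and base
  class names assumed):

      import sqlalchemy as sa
      from sqlalchemy import orm

      class SecurityGroupRule(model_base.BASEV2):  # base class assumed
          __tablename__ = 'securitygrouprules'
          id = sa.Column(sa.String(36), primary_key=True)
          security_group_id = sa.Column(sa.String(36),
                                        sa.ForeignKey('securitygroups.id'))
          # lazy='joined' loads a group's rules in the same SELECT instead
          # of issuing one extra query per group while listing.
          security_group = orm.relationship(
              'SecurityGroup',
              backref=orm.backref('rules', lazy='joined', cascade='delete'))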

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423571] Re: l3 agent: need to improve exception handling in _process_router_update()

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => Medium

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1423571

Title:
  l3 agent: need to improve exception handling in
  _process_router_update()

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  In _process_router_update(), where the _process_router_if_compatible()
  method is called, only the RouterNotCompatibleWithAgent exception is
  handled. If any other (intermittent) exception happens inside
  _process_router_if_compatible() (in my case it was a MessagingTimeout on
  fetching the external net from the server), the agent completely forgets
  about the router and continues working as usual, while the server shows
  the router as hosted by the agent.

  devstack$ neutron l3-agent-list-hosting-router router1
  +--------------------------------------+----------+----------------+-------+
  | id                                   | host     | admin_state_up | alive |
  +--------------------------------------+----------+----------------+-------+
  | 51cdbbf9-d160-4adf-9d9c-d720b96c79cc | devstack | True           | :-)   |
  +--------------------------------------+----------+----------------+-------+
  devstack$ neutron agent-show 51cdbbf9-d160-4adf-9d9c-d720b96c79cc
  +---------------------+---------------------------------------------------------------------------+
  | Field               | Value                                                                     |
  +---------------------+---------------------------------------------------------------------------+
  | admin_state_up      | True                                                                      |
  | agent_type          | L3 agent                                                                  |
  | alive               | True                                                                      |
  | binary              | neutron-l3-agent                                                          |
  | configurations      | {                                                                         |
  |                     |  "router_id": "",                                                         |
  |                     |  "agent_mode": "legacy",                                                  |
  |                     |  "gateway_external_network_id": "",                                       |
  |                     |  "handle_internal_only_routers": true,                                    |
  |                     |  "use_namespaces": true,                                                  |
  |                     |  "routers": 0,                                                            |
  |                     |  "interfaces": 0,                                                         |
  |                     |  "floating_ips": 0,                                                       |
  |                     |  "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver", |
  |                     |  "external_network_bridge": "br-ex",                                     |
  |                     |  "ex_gw_ports": 0                                                         |
  |                     | }                                                                         |
  | created_at          | 2015-02-19 12:44:56                                                       |
  | description         |                                                                           |
  | heartbeat_timestamp | 2015-02-19 13:27:59                                                       |
  | host                | devstack                                                                  |
  | id                  | 51cdbbf9-d160-4adf-9d9c-d720b96c79cc                                      |
  | started_at          | 2015-02-19 13:26:59                                                       |
  | topic               | l3_agent                                                                  |
  +---------------------+---------------------------------------------------------------------------+

  We need to catch a broader exception there and set fullsync = True, like
  it's done earlier in _process_router_update() when getting routers
  from the server.
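
  A hedged sketch of the suggested handling (logger and exception module
  names assumed):

      try:
          self._process_router_if_compatible(router)
      except n_exc.RouterNotCompatibleWithAgent as e:
          LOG.error(e.msg)
      except Exception:
          # Any unexpected failure: keep the router and schedule a resync
          # instead of silently forgetting it.
          LOG.exception("Failed to process router update; "
                        "scheduling full sync")
          self.fullsync = True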

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.n

[Yahoo-eng-team] [Bug 1380701] Re: create project validates quota usage against the current project

2015-04-09 Thread Adam Gandelman
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Importance: Undecided => Medium

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1380701

Title:
  create project validates quota usage against the current project

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  When creating a new project under Admin->Identity->Create project, it
  seems that the quota values provided for the new project are validated
  against the usage of the current project.  I've verified this behavior
  for volumes, instances, and virtual CPUs.

  To see the problem:
  * create 5 instances in the current project
  * create a new project (Admin->Identity->Create project) and on the Quotas 
tab, set the new project's Instance quota to 4.  
  On create this message will be displayed:
  Quota value(s) cannot be less than the current usage value(s): 5 Instances 
used

  There is no reason the *current* usage should be compared to a *new*
  project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1380701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376227] Re: Always use shades of blue for distribution pie charts

2015-04-09 Thread Adam Gandelman
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Importance: Undecided => Low

** Changed in: horizon/juno
   Status: New => Fix Committed

** Changed in: horizon/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1376227

Title:
  Always use shades of blue for distribution pie charts

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  It was suggested that red and orange colors can indicate problems and
  therefore should not be used in distribution pie charts. We should
  instead always use different shades of blue for this type of pie
  chart.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1376227/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442043] [NEW] Brocade Vyatta Firewall feature impacted by L3 agent refactor

2015-04-09 Thread vishwanath jayaraman
Public bug reported:

The patch set https://review.openstack.org/#/c/163222/ related to L3
restructure that got merged removed the "process_router()" method that
the VyattaFirewallAgent class was using. The impact is that firewall is
not updated or applied on the Vyatta VRouter as expected when an end
user executes "Set Gateway" or "add/remove interfaces" method  on the
router.

** Affects: neutron
 Importance: Undecided
 Assignee: vishwanath jayaraman (vishwanathj)
 Status: New


** Tags: neutron-fwaas

** Changed in: neutron
 Assignee: (unassigned) => vishwanath jayaraman (vishwanathj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442043

Title:
  Brocade Vyatta Firewall feature impacted by L3 agent refactor

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The patch set https://review.openstack.org/#/c/163222/ related to the L3
  restructure that got merged removed the "process_router()" method that
  the VyattaFirewallAgent class was using. The impact is that the firewall
  is not updated or applied on the Vyatta VRouter as expected when an
  end user executes the "Set Gateway" or "add/remove interfaces" methods
  on the router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436674] Re: "BOOLEAN" data type is not supported in db2, hits error during db migration

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1436674

Title:
"BOOLEAN" data type is not supported in db2, hits error during db
  migration

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  
  /usr/bin/neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

  INFO  [alembic.migration] Running upgrade f15b1fb526dd -> 341ee8a4ccb5, sync with cisco repo
  INFO  [alembic.migration] Running upgrade 341ee8a4ccb5 -> 35a0f3365720, add port-security in ml2
  Traceback (most recent call last):
    File "/usr/bin/neutron-db-manage", line 10, in <module>
      sys.exit(main())
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 238, in main
      CONF.command.func(config, CONF.command.name)
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 106, in do_upgrade
      do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 72, in do_alembic_command
      getattr(alembic_command, cmd)(config, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/alembic/command.py", line 165, in upgrade
      script.run_env()
    File "/usr/lib/python2.7/site-packages/alembic/script.py", line 382, in run_env
      util.load_python_file(self.dir, 'env.py')
    File "/usr/lib/python2.7/site-packages/alembic/util.py", line 241, in load_python_file
      module = load_module_py(module_id, path)
    File "/usr/lib/python2.7/site-packages/alembic/compat.py", line 79, in load_module_py
      mod = imp.load_source(module_id, path, fp)
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py", line 116, in <module>
      run_migrations_online()
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py", line 107, in run_migrations_online
      context.run_migrations()
    File "<string>", line 7, in run_migrations
    File "/usr/lib/python2.7/site-packages/alembic/environment.py", line 742, in run_migrations
      self.get_context().run_migrations(**kw)
    File "/usr/lib/python2.7/site-packages/alembic/migration.py", line 305, in run_migrations
      step.migration_fn(**kw)
    File "/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/35a0f3365720_add_port_security_in_ml2.py", line 32, in upgrade
      op.execute('INSERT INTO networksecuritybindings (network_id, '
    File "<string>", line 7, in execute
    File "/usr/lib/python2.7/site-packages/alembic/operations.py", line 1270, in execute
      execution_options=execution_options)
    File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 109, in execute
      self._exec(sql, execution_options)
    File "/usr/lib/python2.7/site-packages/ibm_db_alembic/ibm_db.py", line 56, in _exec
      result = super(IbmDbImpl, self)._exec(construct, *args, **kw)
    File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 106, in _exec
      return conn.execute(construct, *multiparams, **params)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
      return meth(self, multiparams, params)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
      return connection._execute_clauseelement(self, multiparams, params)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement
      compiled_sql, distilled_params
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
      context)
    File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/compat/handle_error.py", line 261, in _handle_dbapi_exception
      e, statement, parameters, cursor, context)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1155, in _handle_dbapi_exception
      util.raise_from_cause(newraise, exc_info)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
      reraise(type(exception), exception, tb=exc_tb)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
      context)
    File "/usr/lib/python2.7/site-packages/ibm_db_sa/ibm_db.py", line 106, in do_execute
      cursor.execute(statement, parameters)
    File "/usr/lib64/python2.7/site-packages/ibm_db_dbi.py", line 1335, in execute
      self._execute_helper(parameters)
    File "/usr/lib64/python2.7/site-packages/ibm_db_dbi.py", line 1247, in _execute_helper
      raise self.messages[len(self.messages) - 1]
  oslo_db.exception.DBError: (ProgrammingError) ibm_db_dbi::ProgrammingError
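
The failing statement is the raw INSERT in
35a0f3365720_add_port_security_in_ml2.py, which appears to embed a
boolean literal that DB2 cannot parse. A minimal sketch, assuming that
shape, of a dialect-neutral rewrite in which SQLAlchemy renders the
boolean per backend (illustrative, not the merged fix):

    import sqlalchemy as sa
    from sqlalchemy.sql import column, table
    from alembic import op

    # table stubs matching the migration above; exact types are assumed
    networks = table('networks', column('id', sa.String(36)))
    bindings = table('networksecuritybindings',
                     column('network_id', sa.String(36)),
                     column('port_security_enabled', sa.Boolean))

    def upgrade():
        # INSERT ... SELECT built from expression constructs, so each
        # dialect (including DB2) gets a boolean literal it accepts
        op.execute(bindings.insert().from_select(
            ['network_id', 'port_security_enabled'],
            sa.select([networks.c.id, sa.true()])))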

[Yahoo-eng-team] [Bug 1376325] Re: Cannot enable DVR and IPv6 simultaneously

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => Medium

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376325

Title:
  Cannot enable DVR and IPv6 simultaneously

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  While testing out the devstack change to support IPv6,
  https://review.openstack.org/#/c/87987/, I tripped over a DVR error,
  since I have DVR enabled by default in local.conf.

  I have these two things enabled in local.conf:

  Q_DVR_MODE=dvr_snat
  IP_VERSION=4+6

  After locally fixing lib/neutron to teach it about the DVR snat-
  namespace (another bug to be filed for that), stack.sh was able to
  complete, but the l3-agent wasn't very happy:

  Stderr: '' execute /opt/stack/neutron/neutron/agent/linux/utils.py:81
  2014-09-30 12:53:47.511 21778 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'ip', 'rule', 'add', 'from', 
'fd00::1/64', 'lookup', '336294682933583715844663186250927177729', 'priority', 
'336294682933583715844663186250927177729'] create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:46
  2014-09-30 12:53:47.641 21778 ERROR neutron.agent.linux.utils [-]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'arping', '-A', '-I', 
'qr-3d0eda6e-54', '-c', '3', 'fd00::1']
  Exit code: 2
  Stdout: ''
  Stderr: 'arping: unknown host fd00::1\n'
  2014-09-30 12:53:47.643 21778 ERROR neutron.agent.l3_agent [-] Failed sending 
gratuitous ARP:
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'arping', '-A', '-I', 
'qr-3d0eda6e-54', '-c', '3', 'fd00::1']
  Exit code: 2
  Stdout: ''
  Stderr: 'arping: unknown host fd00::1\n'
  2014-09-30 12:53:48.682 21778 ERROR neutron.agent.linux.utils [-]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'ip', 'rule', 'add', 'from', 
'fd00::1/64', 'lookup', '336294682933583715844663186250927177729', 'priority', 
'336294682933583715844663186250927177729']
  Exit code: 255
  Stdout: ''
  Stderr: 'Error: argument "336294682933583715844663186250927177729" is wrong: 
preference value is invalid\n\n'
  2014-09-30 12:53:48.683 21778 ERROR neutron.agent.l3_agent [-] DVR: error 
adding redirection logic
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Traceback (most 
recent call last):
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 1443, in _snat_redirect_add
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent 
ns_ipr.add_rule_from(sn_port['ip_cidr'], snat_idx, snat_idx)
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 202, in add_rule_from
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent ip = 
self._as_root('', 'rule', tuple(args))
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 74, in _as_root
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent 
log_fail_as_error=self.log_fail_as_error)
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 86, in _execute
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent 
log_fail_as_error=log_fail_as_error)
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 84, in execute
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent RuntimeError:
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'ip', 'rule', 'add', 
'from', 'fd00::1/64', 'lookup', '336294682933583715844663186250927177729', 
'priority', '336294682933583715844663186250927177729']
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Exit code: 255
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Stdout: ''
  2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Stderr: 'Error: argument "336294682933583715844663186250927177729" is wrong: preference value is invalid\n\n'
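
The 39-digit "preference" in the failing command is just the integer
value of the gateway address: _snat_redirect_add derives snat_idx from
sn_port['ip_cidr'], which always fits the kernel's 32-bit ip-rule
preference for IPv4 but can never fit a 128-bit IPv6 address. A minimal
sketch of the arithmetic (netaddr is used here purely for illustration):

    import netaddr

    # ip_cidr comes straight from the traceback above
    snat_idx = int(netaddr.IPNetwork('fd00::1/64').ip)
    print(snat_idx)            # 336294682933583715844663186250927177729
    print(snat_idx < 2 ** 32)  # False -> "preference value is invalid"

    # the same derivation always stays in range for IPv4
    print(int(netaddr.IPNetwork('192.168.1.1/24').ip) < 2 ** 32)  # True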

[Yahoo-eng-team] [Bug 1408334] Re: OVS agent hangs on rpc calls if neutron-server is down and ovs-agent received SIGTERM

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => Medium

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408334

Title:
  OVS agent hangs on rpc calls if neutron-server is down and ovs-agent
  received SIGTERM

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  There is an infinite loop in the OVS agent driven by one variable. If
  the OVS agent receives a SIGTERM signal while the loop is running, the
  agent must wait until execution reaches the loop control variable. If
  at the same time neutron-server is down, the agent still issues rpc
  call() methods and waits for a response from neutron-server. Several
  rpc timeouts must occur before the OVS agent quits. If this whole exit
  process takes more than 90 seconds, systemd by default sends SIGKILL
  to the ovs-agent process, which means ovs-agent didn't exit with exit
  code 0. The RPC calls are not necessary if we know the agent is going
  to shut down.
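
A minimal sketch of the mitigation the report implies: record SIGTERM in
the loop-control flag and guard each blocking RPC on it, so an
unreachable neutron-server cannot hold the exiting agent through several
rpc timeouts. Names are illustrative, not the merged patch:

    import signal
    import time

    class FakeAgent(object):
        """Stand-in for the OVS agent's daemon loop."""

        def __init__(self):
            self.run_daemon_loop = True
            signal.signal(signal.SIGTERM, self._handle_sigterm)

        def _handle_sigterm(self, signum, frame):
            # only flip the flag; do no blocking work inside the handler
            self.run_daemon_loop = False

        def _report_state_via_rpc(self):
            pass  # stands in for a blocking rpc call() to neutron-server

        def daemon_loop(self):
            while self.run_daemon_loop:
                # ... per-iteration work; SIGTERM may arrive here ...
                if self.run_daemon_loop:  # skip RPC once shutdown begins
                    self._report_state_via_rpc()
                time.sleep(2)

    FakeAgent().daemon_loop()  # runs until the process receives SIGTERM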

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1408334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403860] Re: L3 HA routers have IPv6 link local address on devices, periodically send traffic, moving MACs around and disrupting traffic

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403860

Title:
  L3 HA routers have IPv6 link local address on devices, periodically
  send traffic, moving MACs around and disrupting traffic

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  In the HA routers case we place the same Neutron port on all HA router
  instances. This means that they share the same MAC and IP addresses.
  We configure all IP addresses in keepalived.conf so that keepalived
  takes care to move the IP addresses, and configure them only on the
  master instance. The MAC address, however, is present on all HA router
  devices on all network nodes, and so is the IPv6 link local address
  that is generated from that MAC address. This means that we have an
  active (IPv6) address in multiple places in the network. Any traffic
  generated from said address on a standby node will change the MAC
  tables of the underlay network, causing it to think that the MAC
  address has moved from the master instance to any of the standbys.
  This causes network disruption.

  Severity / reproduction:
  Create an HA router on a setup with 3 network nodes. The HA router is created 
on all nodes. Connect it to an internal and external network. Create an 
instance and configure it with a floating IP. Ping the floating IP: Every two 
minutes, we've observed the standby nodes sending an ICMPv6 multicast listener 
report. The MAC address of the external interface of the master router will now 
move (From the perspective of the underlay), causing traffic to not reach the 
correct (Master) node. After 30 seconds of packet loss the client will re-issue 
an ARP request for the IPv4 address, which the master will answer, moving the 
MAC back and fixing the issue. This repeats every 2 minutes, with 30 seconds of 
packet loss, resulting in 75% up-time. Note: I think we can do better than 75%.

  Solutions:
  The sledgehammer solution would be to shut down all NICs on standby routers 
and open them on the master instance using the keepalived notifier scripts. In 
the spirit of keeping these scripts as lightweight as possible, I'd like to 
solve this issue instead by handling the IPv6 link local address like we do 
with IPv4 addresses: Not configuring them on the device, but adding them as a 
VIP to keepalived.conf and let keepalived configure the address on the master 
node only.
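
The proposed fix needs the EUI-64 link-local address that the kernel
auto-derives from the shared MAC, so that it can be written into
keepalived.conf as a VIP instead of staying active on every standby. A
sketch of the derivation; the MAC is borrowed from another log in this
digest, and the qg- device name is made up:

    import netaddr

    mac = netaddr.EUI('fa:16:3e:1d:f6:cb')
    print(mac.ipv6_link_local())  # fe80::f816:3eff:fe1d:f6cb

    # keepalived.conf would then carry, on the master only, something
    # like:
    #     virtual_ipaddress {
    #         fe80::f816:3eff:fe1d:f6cb/64 dev qg-3d0eda6e-54 scope link
    #     }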

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255441] Re: annoying Arguments dropped when creating context

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255441

Title:
  annoying Arguments dropped when creating context

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  2013-11-27 16:45:11.379 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:11.593 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:12.194 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:12.909 5568 WARNING neutron.context 
[req-2d1bd4b5-dcc1-4cfe-b29c-711a938ab74e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:14.719 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:15.383 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:15.594 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:16.194 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:16.912 5568 WARNING neutron.context 
[req-21f20201-2691-4c13-a77d-3e05c0ad1777 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:18.722 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:19.387 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:19.596 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:20.194 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:20.914 5568 WARNING neutron.context 
[req-f89f6bde-b6b8-4f32-a897-30164c826bc0 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:22.719 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:23.390 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:23.595 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:24.194 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:24.916 5568 WARNING neutron.context 
[req-2da8f49d-77d1-49bf-9b3d-081075b1c1de None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:26.723 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:27.394 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:27.598 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:28.195 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:28.915 5568 WARNING neutron.context 
[req-9846d3db-f327-4df5-a4b9-ea83e1da0a4b None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:30.725 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:31.395 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:31.599 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:32.196 5568 WARNI

[Yahoo-eng-team] [Bug 1365727] Re: Tenant able to create networks using N1kv network profiles not explicitly assigned to it

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365727

Title:
  Tenant able to create networks using N1kv network profiles not
  explicitly assigned to it

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Tenants are able to create networks using network profiles that are
  not explicitly assigned to them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420032] Re: remove_router_interface doesn't scale well with dvr routers

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420032

Title:
  remove_router_interface doesn't scale well with dvr routers

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  With DVR enabled, neutron remove-router-interface significantly
  degrades in response time as the number of l3_agents and the number of
  routers increases. A significant contributor to the poor performance
  is check_ports_exist_on_l3agent. The call to get_subnet_ids_on_router
  returns an empty list since the port has already been deleted by this
  point. The empty subnet list is then used as a filter for the
  subsequent call core_plugin.get_ports, which unexpectedly returns all
  ports instead of an empty list of ports. Erroneously looping through
  the entire list of ports is the biggest contributor to the poor
  scalability.
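
A minimal, self-contained sketch of the pitfall described above: an
empty value list in a filter dict applies no constraint, so the query
matches every port, and the caller has to short-circuit before querying.
FakePlugin and the helper are illustrative, not neutron's real code:

    class FakePlugin(object):
        ports = [{'id': 'p1', 'subnet_id': 's1'},
                 {'id': 'p2', 'subnet_id': 's2'}]

        def get_ports(self, filters=None):
            subnet_ids = (filters or {}).get('subnet_id')
            if not subnet_ids:       # empty filter: no constraint applied
                return list(self.ports)
            return [p for p in self.ports if p['subnet_id'] in subnet_ids]

    plugin = FakePlugin()
    print(len(plugin.get_ports(filters={'subnet_id': []})))  # 2: all ports!

    def ports_exist_on_agent(subnet_ids):
        if not subnet_ids:           # the missing guard: nothing can match
            return False
        return bool(plugin.get_ports(filters={'subnet_id': subnet_ids}))

    print(ports_exist_on_agent([]))  # False, without scanning every port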

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414218] Re: Remove extraneous trace in linux/dhcp.py

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414218

Title:
  Remove extraneous trace in linux/dhcp.py

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  The debug tracepoint in Dnsmasq._output_hosts_file is extraneous and
  causes unnecessary performance overhead due to string formatting when
  creating lots (> 1000) of ports at one time.

  The trace point is unnecessary since the data is being written to disk
  and the file can be examined in a worst-case scenario. The added
  performance overhead is an order of magnitude in difference (~0.5
  seconds versus ~0.05 seconds at 1500 ports).
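
The merged fix simply removes the trace, but the measurement
generalizes: the cost is paid even with DEBUG disabled, because the
argument string is built eagerly. A minimal sketch, with illustrative
names and sizes, of the pattern and the usual guard where such a line
must stay:

    import logging

    LOG = logging.getLogger(__name__)
    hosts = ['host-%d,fa:16:3e:00:%02x:%02x' % (i, i // 256, i % 256)
             for i in range(1500)]

    # eager: the join and %-interpolation run on every call, whether or
    # not DEBUG is enabled -- this is the overhead the report measured
    LOG.debug('Building host file: %s' % '\n'.join(hosts))

    # deferring interpolation to the logger helps, but the join is still
    # built eagerly; guard expensive argument construction explicitly
    if LOG.isEnabledFor(logging.DEBUG):
        LOG.debug('Building host file: %s', '\n'.join(hosts))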

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442127] [NEW] nova boot fails when there is another nova instance deploying

2015-04-09 Thread baiyuan
Public bug reported:

1. nova version: root@byrh09:/opt/stack/nova# git log -1
commit 9c45ff348d7eb75ae8098b6c950db549aaff282a
Merge: 480c419 07b5373
Author: Jenkins
Date:   Tue Mar 31 08:14:43 2015 +0000

Merge "Fix API links and labels"
2. reproduce steps:
In my environment there are 2 ironic baremetal hosts; I want to use nova
boot to deploy these 2 nodes, and each nova flavor can select one
baremetal host.
When I nova boot one instance, it starts to deploy a baremetal host
(task state is spawning, power state is NOSTATE, status is BUILD); it
works well and starts to deploy the right host based on the nova flavor.
If I then execute another, different nova boot for a second instance,
that second nova boot fails and cannot find the right server, the second
baremetal host.
If the first nova boot has finished and its status is ACTIVE with power
state Running, and I execute the same second nova boot command at that
point, it can select the second, right baremetal host.
I think that if different nova boot commands use different flavors and
different instance names, and there are enough baremetal nodes, these
nova boots should work well.
My commands are as follows:
a, nova boot --flavor 21_baremetal --image $XCAT_IMAGE test_xcat --nic
net-id=$net_id --key-name xcat_key
b, nova boot --flavor 23_baremetal --image $XCAT_IMAGE test_pcm --nic
net-id=$net_id --key-name xcat_key

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442127

Title:
  nova boot fails when there is another nova instance deploying

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. nova version: root@byrh09:/opt/stack/nova# git log -1
  commit 9c45ff348d7eb75ae8098b6c950db549aaff282a
  Merge: 480c419 07b5373
  Author: Jenkins
  Date:   Tue Mar 31 08:14:43 2015 +0000

  Merge "Fix API links and labels"
  2. reproduce steps:
  In my environment there are 2 ironic baremetal hosts; I want to use
  nova boot to deploy these 2 nodes, and each nova flavor can select one
  baremetal host.
  When I nova boot one instance, it starts to deploy a baremetal host
  (task state is spawning, power state is NOSTATE, status is BUILD); it
  works well and starts to deploy the right host based on the nova
  flavor. If I then execute another, different nova boot for a second
  instance, that second nova boot fails and cannot find the right
  server, the second baremetal host.
  If the first nova boot has finished and its status is ACTIVE with
  power state Running, and I execute the same second nova boot command
  at that point, it can select the second, right baremetal host.
  I think that if different nova boot commands use different flavors and
  different instance names, and there are enough baremetal nodes, these
  nova boots should work well.
  My commands are as follows:
  a, nova boot --flavor 21_baremetal --image $XCAT_IMAGE test_xcat --nic
  net-id=$net_id --key-name xcat_key
  b, nova boot --flavor 23_baremetal --image $XCAT_IMAGE test_pcm --nic
  net-id=$net_id --key-name xcat_key

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367786] Re: Hyper-V driver should log a clear error message during migrations for remote node permissions errors

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Low

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367786

Title:
  Hyper-V driver should log a clear error message during migrations for
  remote node permissions errors

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  When failing to access a remote SMB UNC path, Python raises the
  following exception:

  WindowsError: [Error 123] The filename, directory name, or volume
  label syntax is incorrect: ''

  This is definitely misleading when troubleshooting the issue, which
  occurs during resize / cold migrations.

  The Nova driver should report a clear error message, making sure the
  user understands the full context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408230] Re: name validate check is necessary for neutron-core

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408230

Title:
  name validate check is necessary for neutron-core

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  So far, validation of the name when creating a network, subnet, or
  port is not functional.
  This is because the validate attribute of name in the
  RESOURCE_ATTRIBUTE_MAP of neutron/api/v2/attributes.py is set to
  "None".
  When a user deliberately inputs a name longer than 255 characters, an
  internal DB error is returned.
  ==CLI result: network==
  $ neutron net-create 
1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678
  Request Failed: internal server error while processing your request.
  ===
  ==CLI result: subnet==
  $ neutron subnet-create hogehoge --name 
1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678
 192.168.1.0/24
  Request Failed: internal server error while processing your request.
  ===
  ==CLI result: port==
  stack@neutron-ctrl:~/devstack$ neutron port-create hogehoge --name 
1234567812345
  
67812345678123456781234567812345678123456781234567812345678123456781234567812345
  
67812345678123456781234567812345678123456781234567812345678123456781234567812345
  
67812345678123456781234567812345678123456781234567812345678123456781234567812345
  678
  Request Failed: internal server error while processing your request.
  
  ==Trace log: network==
  2015-01-08 02:11:05.152 2469 TRACE neutron.api.v2.resource DBError: 
(DataError) (1406, "Data too long for column 'name' at row 1") 'INSERT INTO 
networks (tenant_id, id, name, status, admin_state_up, shared) VALUES (%s, %s, 
%s, %s, %s, %s)' ('ea398e22f5b74d8aa7ed19a41269690e', 
'f9cd32ac-6fb6-4a2f-9fd9-d8c48df5ade0', 
'1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678',
 'ACTIVE', 1, 0)
  
  ==Trace log: subnet==
  2015-01-08 01:54:56.821 2469 TRACE neutron.api.v2.resource DBError: 
(DataError) (1406, "Data too long for column 'name' at row 1") 'INSERT INTO 
subnets (tenant_id, id, name, network_id, ip_version, cidr, gateway_ip, 
enable_dhcp, shared, ipv6_ra_mode, ipv6_address_mode) VALUES (%s, %s, %s, %s, 
%s, %s, %s, %s, %s, %s, %s)' ('ea398e22f5b74d8aa7ed19a41269690e', 
'3dde1013-8f9c-41f6-9d87-0a47bac77ab1', 
'1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678',
 '5d61a7be-f0e0-4391-930a-62c100a4dcad', 4, '192.168.1.0/24', '192.168.1.1', 1, 
0, None, None)
  
  ==Trace log: port==
  2015-01-08 02:00:11.032 2469 TRACE neutron.api.v2.resource DBError: 
(DataError) (1406, "Data too long for column 'name' at row 1") 'INSERT INTO 
ports (tenant_id, id, name, network_id, mac_address, admin_state_up, status, 
device_id, device_owner) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)' 
('ea398e22f5b74d8aa7ed19a41269690e', '983d6be6-987b-4eb7-94e7-a85518600b3c', 
'1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678',
 '5d61a7be-f0e0-4391-930a-62c100a4dcad', 'fa:16:3e:1d:f6:cb', 1, 'DOWN', '', '')
  ===

  Neutron should return something like a 400 Bad Request error instead
  of an internal DB error.
  The name length should be validated against a 255-character limit.
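
A minimal sketch of the suggested shape of the fix, assuming neutron's
usual 'type:string' max-length validator (illustrative, not the merged
patch):

    NAME_MAX_LEN = 255

    RESOURCE_ATTRIBUTE_MAP = {
        'networks': {
            'name': {'allow_post': True, 'allow_put': True,
                     'validate': {'type:string': NAME_MAX_LEN},  # was None
                     'default': '', 'is_visible': True},
            # 'subnets' and 'ports' get the same 'validate' spec
        },
    }

With this in place, the API layer rejects an over-long name with 400 Bad
Request before any INSERT is attempted.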

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1408230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410622] Re: nova is still broken with boto==2.35*

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => High

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1410622

Title:
  nova is still broken with boto==2.35*

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  Bug 1408987 fixed one auth issue with the signature handling:

  https://review.openstack.org/#/c/146124/

  But when trying to uncap the requirement on master, we hit two new
  failures: when trying to create a security group, we get auth failures
  (401 failures, to be exact).

  Copied from comment 14 of bug 1408987:

  This is still broken on master, when I tried to uncap the boto version
  on master I get new auth failures:

  http://logs.openstack.org/92/146592/1/check/check-tempest-dsvm-
  full/7c375f8/console.html#_2015-01-12_19_11_36_102

  2015-01-12 19:11:36.102 | 
tempest.thirdparty.boto.test_ec2_security_groups.EC2SecurityGroupTest.test_create_authorize_security_group
  2015-01-12 19:11:36.102 | 
--
  2015-01-12 19:11:36.102 |
  2015-01-12 19:11:36.102 | Captured traceback:
  2015-01-12 19:11:36.102 | ~~~
  2015-01-12 19:11:36.103 | Traceback (most recent call last):
  2015-01-12 19:11:36.103 | _StringException: Empty attachments:
  2015-01-12 19:11:36.103 | stderr
  2015-01-12 19:11:36.103 | stdout
  2015-01-12 19:11:36.103 |
  2015-01-12 19:11:36.103 | pythonlogging:'': {{{
  2015-01-12 19:11:36.103 | 2015-01-12 19:07:12,279 27381 DEBUG 
[keystoneclient.auth.identity.v2] Making authentication request to 
http://127.0.0.1:5000/v2.0/tokens
  2015-01-12 19:11:36.103 | 2015-01-12 19:07:13,359 27381 ERROR [boto] 401 
Unauthorized
  2015-01-12 19:11:36.103 | 2015-01-12 19:07:13,359 27381 ERROR [boto] 
  2015-01-12 19:11:36.103 | [EC2 XML error response, tags stripped: AuthFailure / Unauthorized / req-81391f74-7caf-42a6-a3b8-ccd2c7d1cbdf]
  2015-01-12 19:11:36.104 | }}}
  2015-01-12 19:11:36.104 |
  2015-01-12 19:11:36.104 | Traceback (most recent call last):
  2015-01-12 19:11:36.104 | File 
"tempest/thirdparty/boto/test_ec2_security_groups.py", line 32, in 
test_create_authorize_security_group
  2015-01-12 19:11:36.104 | group_description)
  2015-01-12 19:11:36.104 | File "tempest/services/botoclients.py", line 84, in 
func
  2015-01-12 19:11:36.104 | return getattr(conn, name)(*args, **kwargs)
  2015-01-12 19:11:36.104 | File 
"/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 3003, in 
create_security_group
  2015-01-12 19:11:36.104 | SecurityGroup, verb='POST')
  2015-01-12 19:11:36.105 | File 
"/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1207, in 
get_object
  2015-01-12 19:11:36.105 | raise self.ResponseError(response.status, 
response.reason, body)
  2015-01-12 19:11:36.105 | EC2ResponseError: EC2ResponseError: 401 Unauthorized
  2015-01-12 19:11:36.105 | 
  2015-01-12 19:11:36.105 | [EC2 XML error response, tags stripped: AuthFailure / Unauthorized / req-81391f74-7caf-42a6-a3b8-ccd2c7d1cbdf]

  It's something to do with security groups this time.

  http://logs.openstack.org/92/146592/1/check/check-tempest-dsvm-
  full/7c375f8/logs/screen-n-api.txt.gz#_2015-01-12_19_07_13_357

  2015-01-12 19:07:13.357 24624 DEBUG nova.api.ec2.faults [-] EC2 error
  response: AuthFailure: Unauthorized ec2_error_response
  /opt/stack/new/nova/nova/api/ec2/faults.py:29

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1410622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437131] Re: neutron get_subnet is slow

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => Low

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437131

Title:
  neutron get_subnet is slow

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  get_subnet becomes slow with large numbers of IP availability
  ranges. [1] This plagues anything else that calls get_subnet or
  get_subnets, which is a lot of code.


  
  1.  $ time neutron subnet-show 9f4d3afa-5e13-4031-a24e-923adf36efb1
  +-------------------+--------------------------------------------+
  | Field             | Value                                      |
  +-------------------+--------------------------------------------+
  | allocation_pools  | {"start": "0.0.0.2", "end": "0.0.255.253"} |
  | cidr              | 0.0.0.0/16                                 |
  | dns_nameservers   |                                            |
  | enable_dhcp       | True                                       |
  | gateway_ip        | 0.0.0.1                                    |
  | host_routes       |                                            |
  | id                | 9f4d3afa-5e13-4031-a24e-923adf36efb1       |
  | ip_version        | 4                                          |
  | ipv6_address_mode |                                            |
  | ipv6_ra_mode      |                                            |
  | name              |                                            |
  | network_id        | b316a86e-1582-4b56-bbc1-d68dd6635ac9       |
  | tenant_id         | dcbae871d5bd419ea3c17cfb89e8a9c7           |
  +-------------------+--------------------------------------------+

  real  0m4.307s
  user  0m0.353s
  sys   0m0.074s

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429162] Re: When configured, MTU not set for fpr and rfp interfaces on DVR

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429162

Title:
  When configured, MTU not set for fpr and rfp interfaces on DVR

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The network_device_mtu can be set for a non-default MTU size.
  Currently network_device_mtu configuration parameter is not honored
  for interfaces between a router namespace and the fip namespace for
  the DVR.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441990] [NEW] There is no bootable disk when creating a vm from an image with bdm args

2015-04-09 Thread wanghao
Public bug reported:

Version of Nova in my Devstack:
--
commit c1e383ca55bee667ff6cfdaa35a213a61ea7ab3a
Merge: 581ca4f e0a3d60
Author: Jenkins
Date:   Wed Mar 18 01:06:50 2015 +0000
--

When I create a vm from an image and pass bdm args like this:

nova boot --image 0c466b13-163c-4f2f-adfd-cde58d79b33c --flavor
wanghaotype --nic net-id=b8708653-66e4-4f33-91a7-97eb0c65b54e --block-
device source=blank,dest=volume,device=vdb,bootindex=0,size=1 wanghao

It worked, but the vm doesn't have a bootable disk, because the vm's xml
only contains this disk entry (the XML tags were stripped in this
archive; only the volume serial survives):

  e264ca3b-a23a-4d51-9f27-0561c9fca75c

I think we should allow bootindex=1 when creating a vm from an image
with a blank volume, or else forbid this kind of parameter combination.

** Affects: nova
 Importance: Undecided
 Assignee: wanghao (wanghao749)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wanghao (wanghao749)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441990

Title:
  There is no bootable disk when creating a vm from an image with bdm args

Status in OpenStack Compute (Nova):
  New

Bug description:
  Version of Nova in my Devstack:
  --
  commit c1e383ca55bee667ff6cfdaa35a213a61ea7ab3a
  Merge: 581ca4f e0a3d60
  Author: Jenkins
  Date:   Wed Mar 18 01:06:50 2015 +0000
  --

  When I create a vm from an image and pass bdm args like this:

  nova boot --image 0c466b13-163c-4f2f-adfd-cde58d79b33c --flavor
  wanghaotype --nic net-id=b8708653-66e4-4f33-91a7-97eb0c65b54e --block-
  device source=blank,dest=volume,device=vdb,bootindex=0,size=1 wanghao

  It worked, but the vm doesn't have a bootable disk, because the vm's
  xml only contains this disk entry (the XML tags were stripped in this
  archive; only the volume serial survives):

    e264ca3b-a23a-4d51-9f27-0561c9fca75c

  I think we should allow bootindex=1 when creating a vm from an image
  with a blank volume, or else forbid this kind of parameter combination.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394867] Re: test_validate_nameservers fails on some platforms

2015-04-09 Thread Adam Gandelman
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394867

Title:
  test_validate_nameservers fails on some platforms

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  The recently added '1'*59 case fails on some platforms because the
  string is considered a valid IPv4 address by inet_aton, e.g. on NetBSD
  and OS X.
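
A minimal illustration of the platform difference: inet_aton() accepts
the legacy single-integer notation, and BSD-derived platforms accept
even this over-long digit string, while inet_pton() enforces strict
dotted-quad form everywhere:

    import socket

    addr = '1' * 59
    try:
        socket.inet_aton(addr)   # succeeds on NetBSD / OS X
        print('inet_aton accepted it')
    except socket.error:
        print('inet_aton rejected it')  # typical on Linux

    try:
        socket.inet_pton(socket.AF_INET, addr)
    except socket.error:
        print('inet_pton rejected it')  # always: strict a.b.c.d only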

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442004] [NEW] instance group data model allows multiple polices

2015-04-09 Thread Attila Fazekas
Public bug reported:

Currently only two policies are available, and only one can be used with
a server group.

$  nova server-group-create name "affinity" "anti-affinity"
ERROR (BadRequest): Invalid input received: Conflicting policies configured! 
(HTTP 400) (Request-ID: req-1af553f8-5fd6-4227-870b-be963aad2b62)
$  nova server-group-create name "affinity" "affinity"
ERROR (BadRequest): Invalid input received: Duplicate policies configured! 
(HTTP 400) (Request-ID: req-4b697798-89ec-48e1-9840-5e627c08657b)

The spec
https://review.openstack.org/#/c/168372/1/specs/liberty/approved/soft-affinity-for-server-group.rst,cm,
contains two additional policy names, but:

"These new soft-affinity and soft-anti-affinity policies are mutually
exclusive with each other and with the other existing server-group
policies."

I suggest removing the 'instance_group_policy' table and adding a
'policy' field to the 'instance_groups' table.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442004

Title:
  instance group data model allows multiple polices

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently only two policies are available, and only one can be used
  with a server group.

  $  nova server-group-create name "affinity" "anti-affinity"
  ERROR (BadRequest): Invalid input received: Conflicting policies configured! 
(HTTP 400) (Request-ID: req-1af553f8-5fd6-4227-870b-be963aad2b62)
  $  nova server-group-create name "affinity" "affinity"
  ERROR (BadRequest): Invalid input received: Duplicate policies configured! 
(HTTP 400) (Request-ID: req-4b697798-89ec-48e1-9840-5e627c08657b)

  The spec
  https://review.openstack.org/#/c/168372/1/specs/liberty/approved/soft-affinity-for-server-group.rst,cm,
  contains two additional policy names, but:

  "These new soft-affinity and soft-anti-affinity policies are mutually
  exclusive with each other and with the other existing server-group
  policies."

  I suggest removing the 'instance_group_policy' table and adding a
  'policy' field to the 'instance_groups' table.
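
A sketch of the suggested schema change as an alembic-style migration.
The instance_group_policy column names here are assumptions based on the
table names above; this is illustrative only, not an approved migration:

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.add_column('instance_groups',
                      sa.Column('policy', sa.String(255), nullable=True))
        # each group may hold only one policy today, so a correlated
        # subquery can copy it before the association table is dropped
        op.execute(
            'UPDATE instance_groups SET policy = '
            '(SELECT policy FROM instance_group_policy '
            'WHERE instance_group_policy.group_id = instance_groups.id)')
        op.drop_table('instance_group_policy')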

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1196851] Re: Ports for floating IP's are in state DOWN

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1196851

Title:
  Ports for floating IP's are in state DOWN

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Some plugins, for example OVS and Linux bridge, report the port status
  as DOWN instead of ACTIVE for an active floating IP port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1196851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426365] Re: Ml2 Mechanism Driver for OVSvApp Solution

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426365

Title:
  Ml2 Mechanism Driver for OVSvApp Solution

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  With the introduction of the stackforge/networking-vsphere project,
  which includes the OVSvApp L2 agent for doing vSphere networking using
  neutron, we need a thin mechanism driver in neutron that integrates
  the ml2 plugin with the OVSvApp L2 Agent.

  The mechanism driver implements the abstract methods given in
  mech_agent.SimpleAgentMechanismDriverBase and contains the RPCs
  specific to the OVSvApp L2 Agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430994] Re: Some strings split across lines are missing a space

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430994

Title:
  Some strings split across lines are missing a space

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  For example,

   msg = (_("Validation of dictionary's keys failed."
   "Expected keys: %(expected_keys)s "

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423775] Re: TRACE when restarting openvswitch if using OVS DVR agent

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1423775

Title:
  TRACE when restarting openvswitch if using OVS DVR agent

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  ovs_neutron_agent checks the openvswitch service status during its rpc
  loop. If the service was restarted, the agent picks up on it and
  reconfigures all OVS flows on all bridges. In the DVR agent case, the
  agent raises an exception instead as it references the recently
  removed (https://review.openstack.org/#/c/129884/)
  self.dvr_agent.setup_dvr_flows_on_integ_tun_br.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431248] Re: neutron dhcp agent doesn't respond to its port IP change

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1431248

Title:
  neutron dhcp agent doesn't respond to its port IP change

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The neutron DHCP agent doesn't update its interface's IP address to
  reflect a change to its corresponding neutron port.

  How to recreate:

  Create a network with a dhcp agent.
  Use neutron port-update on the dhcp agent's port to change the IP.
  The DHCP agent is still on the old IP address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1431248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408529] Re: nova boot vm with '--nic net-id=xxxx, v4-fixed-ip=xxx' failed

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Low

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408529

Title:
  nova boot vm with '--nic net-id=xxxx, v4-fixed-ip=xxx' failed

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in Python client library for Neutron:
  Fix Released

Bug description:
  Currently, booting a VM with '--nic net-id=xxxx,v4-fixed-ip=xxx'
  fails; the error in the nova-compute log is below:

File "/opt/stack/nova/nova/network/neutronv2/__init__.py", line 84
  , in wrapper
  ret = obj(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/cl
  ient.py", line 1266, in serialize
  self.get_attr_metadata()).serialize(data, self.content_type())
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 390, in serialize
  return self._get_serialize_handler(content_type).serialize(data)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 54, in serialize
  return self.dispatch(data, action=action)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 44, in dispatch
  return action_method(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 66, in default
  return jsonutils.dumps(data, default=sanitizer)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/opensta
  ck/common/jsonutils.py", line 168, in dumps
  return json.dumps(value, default=default, **kwargs)
File "/usr/lib/python2.7/json/__init__.py", line 250, in dumps
  sort_keys=sort_keys, **kw).encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
  chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
  return _iterencode(o, 0)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/
  serializer.py", line 65, in sanitizer
  return six.text_type(obj, 'utf8')
  TypeError: coercing to Unicode: need string or buffer, IPAddress fou
  nd

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430383] Re: "libvirtError: Network filter not found: no nwfilter with matching name" tracing a ton since 3/3

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Medium

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430383

Title:
  "libvirtError: Network filter not found: no nwfilter with matching
  name" tracing a ton since 3/3

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  http://logs.openstack.org/74/162774/2/gate/gate-tempest-dsvm-
  full/8d8876c/logs/screen-n-cpu.txt.gz?level=TRACE

  Seeing a lot of traces like this since 3/3:

  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall Traceback 
(most recent call last):
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File 
"/opt/stack/new/nova/nova/virt/libvirt/firewall.py", line 249, in 
_get_filter_uuid
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall flt = 
self._conn.nwfilterLookupByName(name)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall result = 
proxy_call(self._autowrap, f, *args, **kwargs)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall rv = 
execute(f, *args, **kwargs)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall 
six.reraise(c, e, tb)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall rv = 
meth(*args, **kwargs)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 3783, in 
nwfilterLookupByName
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall if ret is 
None:raise libvirtError('virNWFilterLookupByName() failed', conn=self)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall libvirtError: 
Network filter not found: no nwfilter with matching name 'nova-no-nd-reflection'

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibGlidmlydEVycm9yOiBOZXR3b3JrIGZpbHRlciBub3QgZm91bmQ6IG5vIG53ZmlsdGVyIHdpdGggbWF0Y2hpbmcgbmFtZSAnbm92YS1uby1uZC1yZWZsZWN0aW9uJ1wiIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTUtMDMtMDFUMTU6MDk6MjQrMDA6MDAiLCJ0byI6IjIwMTUtMDMtMTBUMTU6MDk6MjQrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQyNjAwMDIzMDY0NH0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433226] Re: Add some unit tests for ipsec strongswan vpnaas driver

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433226

Title:
  Add some unit tests for ipsec strongswan vpnaas driver

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Add some unit tests for ipsec strongswan vpnaas driver

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414905] Re: usage errors in joinedload + filter in agent schedulers

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414905

Title:
  usage errors in joinedload + filter in agent schedulers

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  With the current coding, joinedload() produces a JOIN and
  the following filter() on the columns from the joined table
  would create another JOIN of the same table.  It doesn't seem
  to be the intended behaviour.  As a consequence the filter
  doesn't work as expected.

  see
  http://docs.sqlalchemy.org/en/rel_0_9/orm/loading_relationships.html#contains-eager
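
  A hedged, self-contained illustration (toy Agent/Binding models, not
  neutron's): joinedload() plus filter() references the joined table twice,
  while contains_eager() reuses the explicit, filtered JOIN:

  from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
  from sqlalchemy.orm import (relationship, sessionmaker, joinedload,
                              contains_eager)
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class Agent(Base):
      __tablename__ = 'agents'
      id = Column(Integer, primary_key=True)
      bindings = relationship('Binding')

  class Binding(Base):
      __tablename__ = 'bindings'
      id = Column(Integer, primary_key=True)
      agent_id = Column(Integer, ForeignKey('agents.id'))
      host = Column(String(32))

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  session = sessionmaker(bind=engine)()

  # Unintended: joinedload() adds its own aliased JOIN, and the filter
  # references the table a second time, so it does not constrain the
  # eagerly loaded rows.
  broken = (session.query(Agent)
            .options(joinedload(Agent.bindings))
            .filter(Binding.host == 'host-1'))

  # Intended: a single JOIN shared by the filter and the eager load.
  fixed = (session.query(Agent)
           .join(Agent.bindings)
           .options(contains_eager(Agent.bindings))
           .filter(Binding.host == 'host-1'))

  print(str(broken))
  print(str(fixed))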

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425844] Re: metadata start with db connections

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1425844

Title:
  metadata start with db connections

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The metadata agent should not try to connect to the database, since it
  does not need database connections at all

  $ neutron-metadata-agent --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/metadata_agent.ini

  2015-02-26 17:06:09.768 3045 WARNING oslo_db.sqlalchemy.session [-] SQL 
connection failed. 10 attempts left.
  10.16.91.1:5672

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1425844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399498] Re: centos 7 unit test fails

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Low

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399498

Title:
  centos 7 unit test fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  CentOS 7 unit test fails.

  To make this test pass:
  export OPENSSL_ENABLE_MD5_VERIFY=1
  export NSS_HASH_ALG_SUPPORT=+MD5 

  
  # ./run_tests.sh -V -s nova.tests.unit.test_crypto.X509Test
  Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit 
--concurrency 0  nova.tests.unit.test_crypto.X509Test'`
  nova.tests.unit.test_crypto.X509Test
  test_encrypt_decrypt_x509 OK  2.73
  test_can_generate_x509    FAIL

  Slowest 2 tests took 6.24 secs:
  nova.tests.unit.test_crypto.X509Test
  test_can_generate_x509    3.51
  test_encrypt_decrypt_x509 2.73

  ======================================================================
  FAIL: nova.tests.unit.test_crypto.X509Test.test_can_generate_x509
  ----------------------------------------------------------------------

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427465] Re: need to add mandatory parameters for Router Info in vArmour fwaas agent

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427465

Title:
  need to add mandatory parameters for Router Info in vArmour fwaas
  agent

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/agents/varmour/varmour_router.py#L64

  The vArmour L3 agent's _router_added is calling
  neutron.agent.l3.router_info.RouterInfo.__init__ without passing the
  mandatory parameters (interface_driver, agent_conf). These parameters
  were introduced in change IDs:

  I33a23eb37678d94cea3ace8afe090935b1e70685
  I0ec75d731d816955c1915e283a137abcb51ac232
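
  A hedged sketch of the needed call, not the merged patch (the mandatory
  argument names come from this report; argument order and the surrounding
  agent attributes are illustrative):

  from neutron.agent.l3 import router_info

  def _router_added(self, router_id, router):
      # Pass the now-mandatory parameters instead of omitting them.
      ri = router_info.RouterInfo(router_id, router,
                                  agent_conf=self.conf,
                                  interface_driver=self.driver)
      self.router_info[router_id] = ri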

  The unit tests that would catch this error:

  https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/tests/unit/services/firewall/drivers/varmour/test_varmour_fwaas.py#L153

  Are being skipped at the gate (which is essentially another bug: why do
  these unit tests require a REST call to succeed, and why do they fail to
  complete that REST call with the default configuration?)
  
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/tests/unit/services/firewall/drivers/varmour/test_varmour_fwaas.py#L182

  Finally, 3rd party vArmour CI is not being run.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2015-04-09 Thread Thierry Carrez
** Changed in: ceilometer
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in Cinder icehouse series:
  Fix Committed
Status in Designate:
  Triaged
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Orchestration API (Heat):
  Triaged
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Glance:
  Fix Committed
Status in Python client library for Neutron:
  Fix Committed
Status in OpenStack Data Processing (Sahara):
  Fix Released
Status in Openstack Database (Trove):
  Fix Released
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
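
  A minimal, self-contained illustration (hypothetical test data, not from
  any of the affected projects) of why a random hash seed breaks such tests:

  # Dict/set iteration order varies between runs once the hash seed is
  # randomized, so tests comparing serialized dicts against fixed strings
  # fail intermittently.
  attrs = {'name': 'net1', 'admin_state_up': True}
  fragile = str(attrs)            # key order depends on the hash seed
  robust = sorted(attrs.items())  # deterministic regardless of the seed
  print(fragile)
  print(robust)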

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425258] Re: test_list_baremetal_nodes race fails with a node not found 404

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Medium

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425258

Title:
  test_list_baremetal_nodes race fails with a node not found 404

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in Tempest:
  Fix Released

Bug description:
  http://logs.openstack.org/35/158435/1/check/check-grenade-dsvm-ironic-
  sideways/2beafaf/logs/new/screen-n-api.txt.gz?level=TRACE

  Apparently this is unhandled and we get a 500 response:

  http://logs.openstack.org/35/158435/1/check/check-grenade-dsvm-ironic-
  sideways/2beafaf/console.html#_2015-02-23_22_11_18_978

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiYmFyZW1ldGFsX25vZGVzLnB5XCIgQU5EIG1lc3NhZ2U6XCJOb2RlXCIgQU5EIG1lc3NhZ2U6XCJjb3VsZCBub3QgYmUgZm91bmRcIiBBTkQgdGFnczpcInNjcmVlbi1uLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyNDgwNjgwMzM5MX0=

  21 hits in the last 7 days, check and gate, master and stable/juno,
  all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1425258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441776] Re: Get image file passing image id without an image file returns a 204 response

2015-04-09 Thread Kamil Rykowski
The behaviour of returning 204 instead of 404 when an image exists but
has no data associated with it was introduced due to this bug report:
https://bugs.launchpad.net/glance/+bug/1251055

It's well documented at http://developer.openstack.org/api-ref-image-v2.html:
"If no image data exists, you receive the HTTP 204 status code."

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1441776

Title:
  Get image file passing image id without an image file returns a 204
  response

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Overview:
  Attempting a GET /images//file returns a 204 response. At one point 
this returned a 404, but that does not appear to be the case anymore.

  Steps to reproduce:
  1) Register a blank image via POST /images
  2) Attempt a GET /images//file
  3) Notice the response is a 204

  Expected:
  A 404 response should be returned

  Actual:
  A 204 response is returned

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1441776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428007] Re: dnsmasq fail to start when not using namespace

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1428007

Title:
  dnsmasq fail to start when not using namespace

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  My OpenStack Neutron deployment doesn't use namespaces. After applying the
  patch https://review.openstack.org/#/c/145829/, my dhcp-agent can't start
  up dnsmasq; the log reports

  2015-03-04 00:46:27.081 23165 ERROR neutron.agent.dhcp.agent [-] Unable to 
enable dhcp for 0c4bcc8a-ed67-4782-8db5-af62318cd6bc.
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 112, in 
call_driver
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 207, in 
enable
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent 
self.spawn_process()
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 375, in 
spawn_process
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent 
self._spawn_or_reload_process(reload_with_HUP=False)
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 394, in 
_spawn_or_reload_process
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent 
pid_file=pid_filename)
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", 
line 179, in enable
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent 
process_manager.enable(reload_cfg=reload_cfg)
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", 
line 76, in enable
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent 
ip_wrapper.netns.execute(cmd, addl_env=self.cmd_addl_env)
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 554, in 
execute
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent return 
utils.execute(cmd, check_exit_code=check_exit_code,
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 89, in 
execute
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent # 
NOTE(termie): this appears to be necessary to let the subprocess
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent RuntimeError:
  2015-03-04 00:46:27.081 23165 TRACE neutron.agent.dhcp.agent Command: 
['dnsmasq', '--no-hosts', '--no
  -resolv', '--strict-order', '--bind-interfaces', 
'--interface=tap8dd394fb-8b', '--except-interface=lo
  ', 
'--pid-file=/var/lib/neutron/dhcp/0c4bcc8a-ed67-4782-8db5-af62318cd6bc/pid', 
'--dhcp-hostsfile=/va
  r/lib/neutron/dhcp/0c4bcc8a-ed67-4782-8db5-af62318cd6bc/host', 
'--addn-hosts=/var/lib/neutron/dhcp/0c
  4bcc8a-ed67-4782-8db5-af62318cd6bc/addn_hosts', 
'--dhcp-optsfile=/var/lib/neutron/dhcp/0c4bcc8a-ed67-
  4782-8db5-af62318cd6bc/opts', '--leasefile-ro', '--dhcp-authoritative', 
'--dhcp-range=set:tag0,10.0.1
  .0,static,86400s', '--dhcp-lease-max=256', '--conf-file=', 
'--domain=openstacklocal']

  Although the "root_helper" has been moved, dnsmasq still needs root
  privilege to start.

  In https://review.openstack.org/#/c/145829/22/neutron/agent/linux/ip_lib.py
  The old code will pass root_helper through class IPWrapper, so the command is 
always run by root privilege. 
  The new code, however, just set run_as_root=True when namespace is used.
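
  A hedged sketch of the required behaviour (the ip_lib API is taken from
  the traceback above; the run_as_root usage is illustrative, not the
  merged patch):

  from neutron.agent.linux import ip_lib

  def spawn(cmd, namespace=None):
      ip_wrapper = ip_lib.IPWrapper(namespace=namespace)
      # dnsmasq needs root whether or not a namespace is used, so the
      # privilege escalation must not depend on the namespace argument.
      return ip_wrapper.netns.execute(cmd, run_as_root=True)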

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1428007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427228] Re: Allow to run neutron-ns-metadata-proxy as nobody

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427228

Title:
  Allow to run neutron-ns-metadata-proxy as nobody

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Currently neutron-ns-metadata-proxy runs with neutron user/group
  permissions on the l3-agent, but we should allow running it with fewer
  permissions, since the neutron user is allowed to run neutron-rootwrap.
  We should restrict neutron-ns-metadata-proxy's permissions as much as
  possible, as it's reachable from VMs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426121] Re: vmw nsx: add/remove interface on dvr is broken

2015-04-09 Thread Adam Gandelman
** Changed in: neutron/juno
   Importance: High => Undecided

** Changed in: neutron/juno
   Status: Fix Released => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426121

Title:
  vmw nsx: add/remove interface on dvr is broken

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in neutron juno series:
  Fix Committed
Status in VMware NSX:
  Fix Committed

Bug description:
  When the NSX-specific extension was dropped in favour of the community
  one, a side effect unfortunately caused add/remove interface operations
  to fail when executed with a subnet id.
  This should be fixed soon and backported to Juno.
  Icehouse is not affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416308] Re: remove_router_interface need to improve its validate to avoid 500 DBError

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416308

Title:
  remove_router_interface need to improve its validate to avoid 500
  DBError

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  A 500 DBError should not be returned for a user operation.

  [User operation]
  
  curl -i -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" 
http://192.168.122.201:9696/v2.0/routers/edff6799-7f1b-4d9e-bc8e-115cd22afd82/remove_router_interface
 -d '{"id":"edff6799-7f1b-4d9e-remove_router_interface"}'
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json; charset=UTF-8
  Content-Length: 150
  X-Openstack-Request-Id: req-31c11f44-73c4-47bf-bbc3-a3b0eb41148d
  Date: Fri, 30 Jan 2015 06:30:26 GMT

  {"NeutronError": {"message": "Request Failed: internal server error while 
processing your request.", "type": "HTTPInternalServerError", "detail": ""}}
  

  When subnet_id or port_id is not defined in the body of the above REST API
  request, a 400 error should be returned.
  However, the validation for this is not sufficient and causes a 500 DB
  error.

  [TraceLog]
  
  2015-01-28 21:37:16.956 2589 ERROR neutron.api.v2.resource 
[req-c65b69cd-9a1b-4083-b85f-ea98030f5022 None] add_router_interface failed
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 194, in 
_handle_action
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/db/l3_db.py", line 367, in 
add_router_interface
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource 'tenant_id': 
subnet['tenant_id'],
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource UnboundLocalError: 
local variable 'subnet' referenced before assignment
  2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource
  

  [About fix]
  A validation like the one in add_router_interface should be good enough.
  Note that add_router_interface returns 400 correctly:
  
  curl -i -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" 
http://192.168.122.201:9696/v2.0/routers/139d6962-0919-444a-8ee4-8e47e35f054b/add_router_interface
 -d '{"router":{"name":"test_router"}}'
  HTTP/1.1 400 Bad Request
  Content-Type: application/json; charset=UTF-8
  Content-Length: 134
  X-Openstack-Request-Id: req-bca4b0ca-9e3a-4171-b783-ccc9a8c1cb80
  Date: Thu, 29 Jan 2015 02:20:39 GMT

  {"NeutronError": {"message": "Bad router request: Either subnet_id or port_id 
must be specified", "type": "BadRequest", "detail": ""}}
  
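  A hedged sketch of such a validation, not the merged patch (BadRequest
  usage mirrors neutron.common.exceptions):

  from neutron.common import exceptions as n_exc

  def _validate_interface_info(interface_info):
      # Reject malformed bodies up front instead of reaching the DB layer.
      subnet_id = (interface_info or {}).get('subnet_id')
      port_id = (interface_info or {}).get('port_id')
      if not subnet_id and not port_id:
          msg = "Either subnet_id or port_id must be specified"
          raise n_exc.BadRequest(resource='router', msg=msg)
      return subnet_id, port_id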

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416308/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1418187] Re: _get_host_numa_topology assumes numa cell has memory

2015-04-09 Thread Adam Gandelman
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided => Medium

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
Milestone: None => 2014.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1418187

Title:
  _get_host_numa_topology assumes numa cell has memory

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in nova package in Ubuntu:
  Confirmed

Bug description:
  NUMA cells are not guaranteed to have memory;
  libvirt capabilities represent that correctly.
  Nova's _get_host_numa_topology assumes that it can convert a cell's memory
  to kilobytes via:
 memory=cell.memory / units.Ki

  but cell.memory ends up being None for some LibvirtConfigCapsNUMACell
  instances.

  The stack trace looks like this:
  [-] unsupported operand type(s) for /: 'NoneType' and 'int'
  Traceback (most recent call last):
File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
145, in wait
  x.wait()
File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
  return self.thread.wait()
File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 173, 
in wait
  return self._exit_event.wait()
File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
  return hubs.get_hub().switch()
File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 293, in 
switch
  return self.greenlet.switch()
File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, 
in main
  result = function(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", 
line 492, in run_service
  service.start()
File "/usr/lib/python2.7/dist-packages/nova/service.py", line 181, in start
  self.manager.pre_start_hook()
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1188, 
in pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6047, 
in update_available_resource
  rt.update_available_resource(context)
File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", 
line 313, in update_available_resource
  resources = self.driver.get_available_resource(self.nodename)
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4825, in get_available_resource
  numa_topology = self._get_host_numa_topology()
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4703, in _get_host_numa_topology
  for cell in topology.cells])
  TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
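
  A hedged sketch of a defensive fix, not the merged patch:

  def cell_memory_kib(cell, Ki=1024):
      # cell.memory is None for memoryless NUMA cells, so fall back to 0
      # instead of raising the TypeError above.
      return (cell.memory or 0) / Ki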

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1418187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401799] Re: Attach Volume to instance running on KVM(RHEL7.0) fails for HP 3PARFC/3PARISCSI volumes

2015-04-09 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401799

Title:
  Attach Volume to instance running on KVM(RHEL7.0) fails for HP
  3PARFC/3PARISCSI  volumes

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  While trying to attach HP 3PAR FC/iSCSI volumes to an instance running on
  KVM (RHEL 7.0), libvirt fails with the message below.

  -
  if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', 
dom=self)
  libvirtError: Failed to open file 
'/dev/mapper/360002ac00128943e': No such file or directory
  -

  Find attached the compute log.

  The below call from attach_volume(nova/virt/libvirt/driver.py) call fails.
  virt_dom.attachDeviceFlags(conf.to_xml(), flags)

  Debugging the problem further, I observe that the "attachDeviceFlags"
  call to libvirt returns -1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1401799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442031] [NEW] network id is not present in update network modal

2015-04-09 Thread Masco Kaliyamoorthy
Public bug reported:

In the update network modal, only the network name is shown; since duplicate
names are allowed, it is difficult to differentiate networks.
Adding the network id will be helpful.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

** Summary changed:

- network id is not present in update network table
+ network id is not present in update network modal

** Description changed:

- in update network model, only network name is shown, since duplicate name 
allowed, i t is difficult to differentiate.
+ in update network modal, only network name is shown, since duplicate name 
allowed, i t is difficult to differentiate.
  adding network id will be helpful.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1442031

Title:
  network id is not present in update network modal

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the update network modal, only the network name is shown; since
  duplicate names are allowed, it is difficult to differentiate networks.
  Adding the network id will be helpful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1442031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442320] [NEW] Attempting to delete an image member from an image, passing a blank member_id returns a 404 response

2015-04-09 Thread Luke Wollney
Public bug reported:

Overview:
When attempting to delete an image member passing a blank member_id, a 404 
("The resource could not be found") response is returned.

Steps to reproduce:
1) Create a new image
2) Add an image member
3) Attempt to delete the image member passing '' for the member_id
4) Notice that a 404 response is returned

Expected:
Like the create image member response, if passing a blank member_id, a 400 
("Member can't be empty") response should be returned.

Actual:
A 404 ("The resource could not be found") response is returned

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1442320

Title:
  Attempting to delete an image member from an image, passing a blank
  member_id returns a 404 response

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Overview:
  When attempting to delete an image member passing a blank member_id, a 404 
("The resource could not be found") response is returned.

  Steps to reproduce:
  1) Create a new image
  2) Add an image member
  3) Attempt to delete the image member passing '' for the member_id
  4) Notice that a 404 response is returned

  Expected:
  Like the create image member response, if passing a blank member_id, a 400 
("Member can't be empty") response should be returned.

  Actual:
  A 404 ("The resource could not be found") response is returned

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1442320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437124] Re: API test_update_agent_description modifies agent assumed unchanged by test_list_agent

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437124

Title:
  API test_update_agent_description modifies agent assumed unchanged by
  test_list_agent

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  API test_update_agent_description modifies the description of the agent
  identified by self.agent['id']. If that test case is run before
  test_list_agent, the latter will fail because it compares against the
  original description:

  2015-03-18 14:47:53.271 | 2015-03-18 14:47:53.237 | 
  2015-03-18 14:47:53.273 | 2015-03-18 14:47:53.239 | Captured traceback:
  2015-03-18 14:47:53.275 | 2015-03-18 14:47:53.240 | ~~~
  2015-03-18 14:47:53.275 | 2015-03-18 14:47:53.242 | Traceback (most 
recent call last):
  2015-03-18 14:47:53.277 | 2015-03-18 14:47:53.243 |   File 
"neutron/tests/tempest/api/network/admin/test_agent_management.py", line 43, in 
test_list_agent
  2015-03-18 14:47:53.302 | 2015-03-18 14:47:53.244 | 
self.assertIn(self.agent, agents)
  2015-03-18 14:47:53.303 | 2015-03-18 14:47:53.246 |   File 
"/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 356, in assertIn
  2015-03-18 14:47:53.304 | 2015-03-18 14:47:53.251 | 
self.assertThat(haystack, Contains(needle), message)
  2015-03-18 14:47:53.304 | 2015-03-18 14:47:53.254 |   File 
"/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  2015-03-18 14:47:53.305 | 2015-03-18 14:47:53.255 | raise 
mismatch_error
  2015-03-18 14:47:53.306 | 2015-03-18 14:47:53.257 | 
testtools.matchers._impl.MismatchError: {u'alive': True, u'started_at': 
u'2015-03-18 14:43:45', u'admin_state_up': True, u'binary': 
u'neutron-dhcp-agent', u'host': 
u'devstack-trusty-rax-ord-1343601.slave.openstack.org', u'created_at': 
u'2015-03-18 14:43:45', u'description': u'description for update agent.', 
u'topic': u'dhcp_agent', u'agent_type': u'DHCP agent', u'id': 
u'25ea8e00-36e8-4408-9b00-d60e8b3a9b39'} not in [{u'alive': True, 
u'started_at': u'2015-03-18 14:43:45', u'admin_state_up': True, u'binary': 
u'neutron-dhcp-agent', u'host': 
u'devstack-trusty-rax-ord-1343601.slave.openstack.org', u'created_at': 
u'2015-03-18 14:43:45', u'description': u'', u'topic': u'dhcp_agent', 
u'agent_type': u'DHCP agent', u'id': u'25ea8e00-36e8-4408-9b00-d60e8b3a9b39'}, 
{u'alive': True, u'started_at': u'2015-03-18 14:43:45', u'admin_state_up': 
True, u'binary': u'neutron-lbaas-agent', u'host': 
u'devstack-trusty-rax-ord-1343601.slave.openstack.o
 rg', u'created_at': u'2015-03-18 14:43:45', u'description': None, u'topic': 
u'n-lbaas_agent', u'agent_type': u'Loadbalancer agent', u'id': 
u'667de509-9eb8-4350-9f42-9a8511b85e06'}, {u'alive': True, u'started_at': 
u'2015-03-18 14:43:45', u'admin_state_up': True, u'binary': 
u'neutron-openvswitch-agent', u'host': 
u'devstack-trusty-rax-ord-1343601.slave.openstack.org', u'created_at': 
u'2015-03-18 14:43:45', u'description': None, u'topic': u'N/A', u'agent_type': 
u'Open vSwitch agent', u'id': u'68006b4f-4950-46d6-b660-ddbe23c9b30c'}, 
{u'alive': True, u'started_at': u'2015-03-18 14:43:45', u'admin_state_up': 
True, u'binary': u'neutron-metering-agent', u'host': 
u'devstack-trusty-rax-ord-1343601.slave.openstack.org', u'created_at': 
u'2015-03-18 14:43:45', u'description': None, u'topic': u'metering_agent', 
u'agent_type': u'Metering agent', u'id': 
u'8f0f8c3a-b8fe-41c8-ad3a-191e26dd71a4'}, {u'alive': True, u'started_at': 
u'2015-03-18 14:43:45', u'admin_state_up': True, u'binary': u'neutron-l3-a
 gent', u'host': u'devstack-trusty-rax-ord-1343601.slave.openstack.org', 
u'created_at': u'2015-03-18 14:43:45', u'description': None, u'topic': 
u'l3_agent', u'agent_type': u'L3 agent', u'id': 
u'f1ed957b-68ed-4669-9c85-84cac8095561'}, {u'alive': True, u'started_at': 
u'2015-03-18 14:43:45', u'admin_state_up': True, u'binary': 
u'neutron-metadata-agent', u'host': 
u'devstack-trusty-rax-ord-1343601.slave.openstack.org', u'created_at': 
u'2015-03-18 14:43:45', u'description': None, u'topic': u'N/A', u'agent_type': 
u'Metadata agent', u'id': u'fa4d51c9-5bbc-4a0b-b6f7-22f60eef742f'}]
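
  A hedged sketch of one way to decouple the test cases (the client call
  and argument shapes are assumed from the test framework, not taken from
  the merged fix):

  def test_update_agent_description(self):
      description = 'description for update agent.'
      # Restore the original description so test ordering cannot break
      # test_list_agent's comparison against self.agent.
      self.addCleanup(self.admin_client.update_agent,
                      agent_id=self.agent['id'],
                      agent_info={'description':
                                  self.agent['description']})
      body = self.admin_client.update_agent(
          agent_id=self.agent['id'],
          agent_info={'description': description})
      self.assertEqual(description, body['agent']['description'])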

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244589] Re: virtual machine can not get DHCP lease due packet has no checksum

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244589

Title:
  virtual machine can not get DHCP lease due packet has no checksum

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  If a virtual machine is using the virtio driver and vhost_net is switched
  on, the virtual machine cannot get a DHCP lease because the DHCP packet
  has no checksum and the guest kernel will drop such packets. So we should
  fill in the checksum before we pass the DHCP packet to the virtual
  machine.
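
  A hedged, self-contained sketch of a common remedy (not necessarily the
  exact merged patch; requires the kernel's xt_CHECKSUM target): fill the
  UDP checksum on packets destined to DHCP clients (udp/68):

  import subprocess

  # Equivalent to: iptables -t mangle -A POSTROUTING -p udp --dport 68
  #                -j CHECKSUM --checksum-fill
  subprocess.check_call([
      'iptables', '-t', 'mangle', '-A', 'POSTROUTING',
      '-p', 'udp', '--dport', '68',
      '-j', 'CHECKSUM', '--checksum-fill'])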

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439010] Re: Encrypted key is not properly handled

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439010

Title:
  Encrypted key is not properly handled

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Haproxy cannot load an encrypted private key. There should be a mechanism
  to decrypt the key with the provided passphrase, and this unencrypted key
  should be loaded.
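
  A hedged sketch using pyOpenSSL (the library choice is illustrative, not
  taken from the merged fix):

  from OpenSSL import crypto

  def strip_passphrase(pem_key, passphrase):
      # Load the encrypted PEM key with its passphrase, then dump it back
      # out unencrypted so haproxy can read it.
      key = crypto.load_privatekey(crypto.FILETYPE_PEM, pem_key,
                                   passphrase.encode('utf-8'))
      return crypto.dump_privatekey(crypto.FILETYPE_PEM, key)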

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439010/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434401] Re: ML2 mechanism driver for Cisco UCS Manager

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434401

Title:
  ML2 mechanism driver for Cisco UCS Manager

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Add a new shim layer for the new ML2 mechanism driver for Cisco UCS
  Manager.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1434401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438738] Re: Nova boot fails with Error Code 500, if quota_port is < -1 in neutron.conf

2015-04-09 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438738

Title:
  Nova boot fails with Error Code 500,  if quota_port is < -1 in
  neutron.conf

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  I am running a scale environment where I needed to raise the port quota
  to unlimited.

  The neutron.conf documentation for the quota_port parameter states the
  following:

  # Number of ports allowed per tenant. A negative value means
  unlimited.

  Looking at this, I had set the value to -2 as:

  quota_port = -2

  After this, the nova boot started failing with Error code 500.

  The error stack is the following:

   TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
1016, in create
   TRACE nova.api.openstack server = self._view_builder.create(req, 
instances[0])
   TRACE nova.api.openstack IndexError: list index out of range
   TRACE nova.api.openstack

  The error is seen because in the /nova/network/neutronv2/api.py
  validate_networks() method, the quota check is overly strict about the
  unlimited case:

  if quotas.get('port', -1) == -1:
      # Unlimited Port Quota
      return num_instances
  else:
      free_ports = quotas.get('port') - len(ports)
      ports_needed = ports_needed_per_instance * num_instances
      if free_ports >= ports_needed:
          return num_instances
      else:
          return free_ports // ports_needed_per_instance

  The above code falls through and returns
  free_ports // ports_needed_per_instance, which is negative.

  Filing this bug to change the above check to something like:

  if quotas.get('port', -1) <= -1:
      # Unlimited Port Quota
      return num_instances

  This will make nova behave consistently with the documentation in
  neutron as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

