[Yahoo-eng-team] [Bug 1447344] Re: DHCP agent: metadata network broken for DVR

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447344

Title:
  DHCP agent: metadata network broken for DVR

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  When the 'metadata network' feature is enabled, the DHCP agent at [1]
  will not spawn a metadata proxy for DVR routers. This should be fixed.

  [1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/dhcp/agent.py#n357
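
  For illustration, a minimal sketch of the kind of router-port check
  involved (the helper is hypothetical, not the code at [1]; the two
  device_owner strings are the standard Neutron constants for legacy and
  distributed router interfaces):

    ROUTER_INTERFACE_OWNERS = (
        'network:router_interface',              # legacy / centralized router
        'network:router_interface_distributed',  # DVR router
    )

    def network_has_router_port(ports):
        # If only the first owner string is checked, the metadata-network
        # logic never sees a DVR router on the network and therefore never
        # spawns a metadata proxy, which matches the symptom above.
        return any(port.get('device_owner') in ROUTER_INTERFACE_OWNERS
                   for port in ports)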

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444397] Re: single allowed address pair rule can exhaust entire ipset space

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444397

Title:
  single allowed address pair rule can exhaust entire ipset space

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  The hash type used by the ipsets is 'ip', which expands a CIDR into an
  entry for every member address (e.g. 10.100.0.0/16 becomes ~65k
  entries). The allowed-address-pairs extension allows CIDRs, so a single
  allowed address pair can exhaust the entire ipset and break the
  security group rules for a tenant.
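
  To illustrate the blow-up with Python's standard ipaddress module (just
  the arithmetic, not Neutron code):

    import ipaddress

    cidr = ipaddress.ip_network(u'10.100.0.0/16')
    # A hash:ip set needs one member per address in the CIDR...
    print(cidr.num_addresses)   # 65536
    # ...whereas a hash:net style set can hold the same CIDR as a single
    # member, which is why a single allowed-address-pair CIDR should not
    # be expanded address-by-address.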

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424096] Re: DVR routers attached to shared networks aren't being unscheduled from a compute node after deleting the VMs using the shared net

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424096

Title:
  DVR routers attached to shared networks aren't being unscheduled from
  a compute node after deleting the VMs using the shared net

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  As the administrator, create a shared network and a DVR router, and
  attach the router to the shared network.

  As a non-admin tenant, boot a VM with a port on the shared network; it
  is the only VM using the shared network and is scheduled to a compute
  node.  When the VM is deleted, the qrouter namespace of the DVR router
  is expected to be removed from the compute node, but it is not.  This
  doesn't happen with routers attached to networks that are not shared.

  The environment consists of 1 controller node and 1 compute node.

  Routers having the problem are created by the administrator attached
  to shared networks that are also owned by the admin:

  As the administrator, run the following commands on a setup having 1
  compute node and 1 controller node:

  1. neutron net-create shared-net -- --shared True
 Shared net's uuid is f9ccf1f9-aea9-4f72-accc-8a03170fa242.

  2. neutron subnet-create --name shared-subnet shared-net 10.0.0.0/16

  3. neutron router-create shared-router
  Router's UUID is ab78428a-9653-4a7b-98ec-22e1f956f44f.

  4. neutron router-interface-add shared-router shared-subnet
  5. neutron router-gateway-set  shared-router public

  
  As a non-admin tenant (tenant-id: 95cd5d9c61cf45c7bdd4e9ee52659d13),
  boot a VM using the shared-net network:

  1. neutron net-show shared-net
  +-----------------+--------------------------------------+
  | Field           | Value                                |
  +-----------------+--------------------------------------+
  | admin_state_up  | True                                 |
  | id              | f9ccf1f9-aea9-4f72-accc-8a03170fa242 |
  | name            | shared-net                           |
  | router:external | False                                |
  | shared          | True                                 |
  | status          | ACTIVE                               |
  | subnets         | c4fd4279-81a7-40d6-a80b-01e8238c1c2d |
  | tenant_id       | 2a54d6758fab47f4a2508b06284b5104     |
  +-----------------+--------------------------------------+

  At this point, there are no VMs using the shared-net network running
  in the environment.

  2. Boot a VM that uses the shared-net network:
     nova boot ... --nic net-id=f9ccf1f9-aea9-4f72-accc-8a03170fa242 ... vm_sharednet
  3. Assign a floating IP to the VM "vm_sharednet"
  4. Delete "vm_sharednet". On the compute node, the qrouter namespace of
     the shared router (qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f) is
     left behind:

  stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
  qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f
   ...

  
  This is consistent with the output of the "neutron
  l3-agent-list-hosting-router" command, which shows the router is still
  being hosted on the compute node.

  
  $ neutron l3-agent-list-hosting-router ab78428a-9653-4a7b-98ec-22e1f956f44f
  +--------------------------------------+----------------+----------------+-------+
  | id                                   | host           | admin_state_up | alive |
  +--------------------------------------+----------------+----------------+-------+
  | 42f12eb0-51bc-4861-928a-48de51ba7ae1 | DVR-Controller | True           | :-)   |
  | ff869dc5-d39c-464d-86f3-112b55ec1c08 | DVR-CN2        | True           | :-)   |
  +--------------------------------------+----------------+----------------+-------+

  Running the "neutron l3-agent-router-remove" command removes the
  qrouter namespace from the compute node:

  $ neutron l3-agent-router-remove ff869dc5-d39c-464d-86f3-112b55ec1c08 ab78428a-9653-4a7b-98ec-22e1f956f44f
  Removed router ab78428a-9653-4a7b-98ec-22e1f956f44f from L3 agent

  stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
  stack@DVR-CN2:~/DEVSTACK/manage$

  This is a workaround to get the qrouter namespace deleted from the
  compute node. The L3 agent scheduler should have removed the router
  from the compute node when the VM was deleted.
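
  A scripted form of the workaround, as a sketch only: it assumes an
  admin-authenticated python-neutronclient Client, uses its standard
  list_l3_agent_hosting_routers / list_ports / remove_router_from_l3_agent
  calls, and does not special-case the node hosting the SNAT namespace.

    def unschedule_router_from_idle_computes(neutron, router_id):
        """Remove a DVR router from hosts that no longer have VM ports."""
        agents = neutron.list_l3_agent_hosting_routers(router_id)['agents']
        for agent in agents:
            # 'binding:host_id' filtering requires admin credentials; the
            # device_owner prefix may need adjusting for your deployment.
            filters = {'binding:host_id': agent['host'],
                       'device_owner': 'compute:nova'}
            if not neutron.list_ports(**filters)['ports']:
                # Same effect as:
                #   neutron l3-agent-router-remove <agent-id> <router-id>
                neutron.remove_router_from_l3_agent(agent['id'], router_id)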

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439817] Re: IP set full error in kernel log

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439817

Title:
  IP set full error in kernel log

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  This is appearing in some logs upstream:
  http://logs.openstack.org/73/170073/1/experimental/check-tempest-dsvm-neutron-full-non-isolated/ac882e3/logs/kern_log.txt.gz#_Apr__2_13_03_06

  And it has also been reported by andreaf in IRC as having been
  observed downstream.

  Logstash is not very helpful as this manifests only with a job
  currently in the experimental queue.
  As said job runs in non-isolated mode, accrual of elements in the ipset
  until it reaches saturation is one thing that might need to be
  investigated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442494] Re: test_add_list_remove_router_on_l3_agent race-y for dvr

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442494

Title:
  test_add_list_remove_router_on_l3_agent race-y for dvr

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  Logstash:

  message:"in test_add_list_remove_router_on_l3_agent" AND build_name
  :"check-tempest-dsvm-neutron-dvr"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9hZGRfbGlzdF9yZW1vdmVfcm91dGVyX29uX2wzX2FnZW50XCIgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay10ZW1wZXN0LWRzdm0tbmV1dHJvbi1kdnJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyODY0OTgxNDY3MSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  Change [1], enabled by [2], exposed an intermittent failure when
  determining whether an agent is eligible for binding or not.

  [1] https://review.openstack.org/#/c/154289/
  [2] https://review.openstack.org/#/c/165246/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441523] Re: changing flavor details on running instances will result in errors popping up for users

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441523

Title:
  changing flavor details on running instances will result in errors
  popping up for users

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New

Bug description:
  1. Install/use an all-in-one w/ demo project
  2. As admin, create a flavor and assign it to the demo project
  3. Log out as admin and log in as demo (must not have admin privs)
  4. As demo, launch an instance on this flavor in the demo project
  5. Log out as demo and log in as admin
  6. As admin, change the amount of RAM for the flavor
  7. Log out as admin, log in as demo
  8. Check the instances page: the size should show "Not available" and
     there should be an error in the upper right saying "Error: Unable to
     retrieve instance size information."

  The error is only shown for non-admin users.

  What happens here:
  when a flavor is edited, nova silently deletes the old flavor and
  creates a new one. Running instances are not touched; the old flavor is
  marked as deleted, and normal users can no longer retrieve its details.
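
  A minimal sketch of the defensive lookup a dashboard can fall back on
  (illustrative only, not Horizon's actual code path; 'nova' is assumed
  to be an authenticated novaclient Client):

    from novaclient import exceptions as nova_exceptions

    def flavor_size_label(nova, flavor_id):
        """Return a display string for an instance's flavor.

        The deleted (old) flavor raises NotFound for non-admin users, so
        fall back to a placeholder instead of failing the whole table.
        """
        try:
            flavor = nova.flavors.get(flavor_id)
            return "%s (%s MB RAM)" % (flavor.name, flavor.ram)
        except nova_exceptions.NotFound:
            return "Not available"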

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438040] Re: fdb entries can't be removed when a VM is migrated

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438040

Title:
  fdb entries can't be removed when a VM is migrated

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  This problem can be reproduced as below:
  1. VM A on compute A, VM B on compute B, l2pop enabled
  2. VM B continuously pings VM A
  3. Live-migrate VM A to compute B
  4. When the live migration finishes, VM B can no longer ping VM A

  The reason is as follows: in the l2pop driver, when VM A migrates to
  compute B, the port status changes from BUILD to ACTIVE. The port is
  added to self.migrated_ports only when the port status is ACTIVE, but
  'remove_fdb_entries' is only issued while the port status is BUILD:
  def update_port_postcommit(self, context):
      ...
      ...
      elif (context.host != context.original_host
            and context.status == const.PORT_STATUS_ACTIVE
            and not self.migrated_ports.get(orig['id'])):
          # The port has been migrated. We have to store the original
          # binding to send appropriate fdb once the port will be set
          # on the destination host
          self.migrated_ports[orig['id']] = (
              (orig, context.original_host))
      elif context.status != context.original_status:
          if context.status == const.PORT_STATUS_ACTIVE:
              self._update_port_up(context)
          elif context.status == const.PORT_STATUS_DOWN:
              fdb_entries = self._update_port_down(
                  context, port, context.host)
              self.L2populationAgentNotify.remove_fdb_entries(
                  self.rpc_ctx, fdb_entries)
          elif context.status == const.PORT_STATUS_BUILD:
              orig = self.migrated_ports.pop(port['id'], None)
              if orig:
                  original_port = orig[0]
                  original_host = orig[1]
                  # this port has been migrated: remove its entries from fdb
                  fdb_entries = self._update_port_down(
                      context, original_port, original_host)
                  self.L2populationAgentNotify.remove_fdb_entries(
                      self.rpc_ctx, fdb_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439857] Re: live-migration failure leave the port to BUILD state

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439857

Title:
  live-migration failure leave the port to BUILD state

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  I've set up a lab where live migration can occur in block mode.

  It seems that if I leave the default config, block live migration
  fails.

  I can see that the port is left in BUILD state after the failure, but
  the VM is still running on the source host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445412] Re: performance of plugin_rpc.get_routers is bad

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445412

Title:
  performance of plugin_rpc.get_routers is bad

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  The get_routers plugin call that the L3 agent makes is serviced by a
  massive number of SQL queries, which causes the whole process to take
  on the order of hundreds of milliseconds for a request covering only
  10 routers.

  This will be a blanket bug for a series of performance improvements
  that will reduce that time by at least an order of magnitude.
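
  The usual shape of such an improvement is collapsing the per-router
  follow-up queries into eager loads. A generic SQLAlchemy sketch (the
  Router/Port models here are made up for illustration, not Neutron's
  real schema):

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship, subqueryload

    Base = declarative_base()

    class Port(Base):                 # hypothetical model
        __tablename__ = 'ports'
        id = Column(String(36), primary_key=True)
        router_id = Column(String(36), ForeignKey('routers.id'))

    class Router(Base):               # hypothetical model
        __tablename__ = 'routers'
        id = Column(String(36), primary_key=True)
        ports = relationship(Port)

    def get_routers(session, router_ids):
        # One query for the routers plus one for all their ports, instead
        # of an extra SELECT per router (the N+1 pattern behind the slow
        # RPC call).
        return (session.query(Router)
                .options(subqueryload(Router.ports))
                .filter(Router.id.in_(router_ids))
                .all())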

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376586] Re: pre_live_migration is missing some disk information in case of block migration

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376586

Title:
  pre_live_migration is missing some disk information in case of block
  migration

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  The pre_live_migration API is called with disk info retrieved by a call
  to driver.get_instance_disk_info when doing a block migration.
  Unfortunately, block device information is not passed, so Nova calls
  LibvirtDriver._create_images_and_backing with partial disk_info.

  As a result, for example when migrating an instance with an NFS volume
  attached, a useless file is created in the instance directory.
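
  A sketch of the kind of filtering pre_live_migration needs (a
  hypothetical helper, not the actual patch): given the driver's
  disk_info list and the block device mapping, skip disks that are really
  attached volumes so no placeholder files get created for them.

    import json

    def local_disks_only(disk_info_json, block_device_info):
        """Drop volume-backed entries from a get_instance_disk_info() result.

        Assumes disk_info_json is a JSON list of dicts with a 'path' key and
        that block_device_info['block_device_mapping'] entries may carry a
        device_path in their connection_info.
        """
        volume_paths = set()
        for bdm in (block_device_info or {}).get('block_device_mapping', []):
            data = bdm.get('connection_info', {}).get('data', {})
            if 'device_path' in data:
                volume_paths.add(data['device_path'])
        # Keep only disks that are not backed by an attached volume.
        return [d for d in json.loads(disk_info_json)
                if d['path'] not in volume_paths]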

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379212] Re: Attaching volume to iso instance is failure because of duplicate device name 'hda'.

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1379212

Title:
  Attaching volume to iso instance is failure because of duplicate
  device name 'hda'.

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  When I try to attach a volume to an ISO-booted instance, the
  volume-attach API returns 200 OK, but the volume is in fact never
  attached; nova-compute.log contains errors like 'libvirtError:
  Requested operation is not valid: target hda already exists'.

  The root device of an ISO instance is hda, so nova-compute should not
  assign hda to the cinder volume again.

  Steps to reproduce:

  1. Boot an instance from an ISO image.
  2. Create a cinder volume.
  3. Try to attach the volume to the ISO instance.

  Attaching the volume fails, and the libvirt error can be found in
  nova-compute.log:

  http://paste.openstack.org/show/105144/
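
  For illustration, a sketch of what device-name allocation should do
  (not nova's actual implementation): skip names already in use on the
  bus, such as the root disk of an ISO-booted guest.

    def next_free_dev(used, prefix='hd'):
        """Return the first unused device name on a bus ('hda', 'hdb', ...)."""
        for letter in 'abcdefghijklmnopqrstuvwxyz':
            candidate = prefix + letter
            if candidate not in used:
                return candidate
        raise ValueError('no free device names left on bus %r' % prefix)

    # With an ISO-booted guest the root disk already occupies 'hda',
    # so the next volume should get 'hdb' rather than colliding:
    print(next_free_dev({'hda'}))   # hdb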

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1379212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383345] Re: PCI-Passthrough : TypeError: pop() takes at most 1 argument (2 given

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383345

Title:
  PCI-Passthrough : TypeError: pop() takes at most 1 argument (2 given

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Setting the below causes nova to fail.

  # White list of PCI devices available to VMs. For example:
  # pci_passthrough_whitelist =  [{"vendor_id": "8086",
  # "product_id": "0443"}] (multi valued)
  #pci_passthrough_whitelist=
  pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"10fb"}]

  Fails with:
  CRITICAL nova [-] TypeError: pop() takes at most 1 argument (2 given)
  2014-10-17 15:28:59.968 7153 CRITICAL nova [-] TypeError: pop() takes at most 1 argument (2 given)
  2014-10-17 15:28:59.968 7153 TRACE nova Traceback (most recent call last):
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/bin/nova-compute", line 10, in <module>
  2014-10-17 15:28:59.968 7153 TRACE nova     sys.exit(main())
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 72, in main
  2014-10-17 15:28:59.968 7153 TRACE nova     db_allowed=CONF.conductor.use_local)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/service.py", line 275, in create
  2014-10-17 15:28:59.968 7153 TRACE nova     db_allowed=db_allowed)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/service.py", line 148, in __init__
  2014-10-17 15:28:59.968 7153 TRACE nova     self.manager = manager_class(host=self.host, *args, **kwargs)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 631, in __init__
  2014-10-17 15:28:59.968 7153 TRACE nova     self.driver = driver.load_compute_driver(self.virtapi, compute_driver)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/virt/driver.py", line 1402, in load_compute_driver
  2014-10-17 15:28:59.968 7153 TRACE nova     virtapi)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/openstack/common/importutils.py", line 50, in import_object_ns
  2014-10-17 15:28:59.968 7153 TRACE nova     return import_class(import_value)(*args, **kwargs)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 405, in __init__
  2014-10-17 15:28:59.968 7153 TRACE nova     self.dev_filter = pci_whitelist.get_pci_devices_filter()
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/pci/pci_whitelist.py", line 88, in get_pci_devices_filter
  2014-10-17 15:28:59.968 7153 TRACE nova     return PciHostDevicesWhiteList(CONF.pci_passthrough_whitelist)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/pci/pci_whitelist.py", line 68, in __init__
  2014-10-17 15:28:59.968 7153 TRACE nova     self.specs = self._parse_white_list_from_config(whitelist_spec)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/pci/pci_whitelist.py", line 49, in _parse_white_list_from_config
  2014-10-17 15:28:59.968 7153 TRACE nova     spec = pci_devspec.PciDeviceSpec(jsonspec)
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/pci/pci_devspec.py", line 132, in __init__
  2014-10-17 15:28:59.968 7153 TRACE nova     self._init_dev_details()
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/pci/pci_devspec.py", line 137, in _init_dev_details
  2014-10-17 15:28:59.968 7153 TRACE nova     self.vendor_id = details.pop("vendor_id", ANY)

  Changing the config to:
  pci_passthrough_whitelist={"vendor_id":"8086","product_id":"10fb"}

  Fixes the above.

  In Icehouse, PCI Passthrough worked with passing a list, in Juno it is
  broken.
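
  For illustration, a sketch of whitelist parsing that tolerates both
  forms, a single JSON object or a JSON list (roughly the shape a
  tolerant parser needs; this is not the actual nova patch and the helper
  name is made up):

    import json

    def parse_pci_whitelist(value):
        """Accept '{"vendor_id": ...}' or '[{"vendor_id": ...}, ...]'.

        The TypeError above comes from code that assumes each parsed entry
        is a dict (details.pop("vendor_id", ANY)) while json.loads() of the
        list form hands it a list instead.
        """
        parsed = json.loads(value)
        entries = parsed if isinstance(parsed, list) else [parsed]
        specs = []
        for entry in entries:
            if not isinstance(entry, dict):
                raise ValueError('PCI whitelist entries must be JSON '
                                 'objects: %r' % entry)
            specs.append({'vendor_id': entry.get('vendor_id'),
                          'product_id': entry.get('product_id')})
        return specs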

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1383345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378132] Re: Hard-reboots ignore root_device_name

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378132

Title:
  Hard-reboots ignore root_device_name

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Hard-rebooting an instance causes the root_device_name to get
  ignored/reset, which can cause wailing and gnashing of teeth if the
  guest operating system is expecting it to not do that.

  Steps to reproduce:

  1. Stand up a devstack
  2. Load the openrc with admin credentials
  3. glance image-update --property root_device_name=sda SOME_CIRROS_IMAGE
  4. Spawn a cirros instance using the above image. The root filesystem should 
present as being mounted on /dev/sda1, and the libvirt.xml should show the disk 
with a target of "scsi"
  5. Hard-reboot the instance

  Expected Behaviour

  The instance comes back up with the same hardware configuration as it
  had when initially spawned, i.e., with its root filesystem attached to
  a SCSI bus

  Actual Behaviour

  The instance comes back with its root filesystem attached to an IDE
  bus.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406486] Re: Suspending an instance fails when using vnic_type=direct

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406486

Title:
  Suspending an instance fails when using vnic_type=direct

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in python-glanceclient:
  New

Bug description:
  When launching an instance with a pre-created port that has
  binding:vnic_type='direct', suspending the instance fails with the
  error: 'NoneType' object has no attribute 'encode'

  Nova compute log:
  http://paste.openstack.org/show/155141/

  Version
  ==
  openstack-nova-common-2014.2.1-3.el7ost.noarch
  openstack-nova-compute-2014.2.1-3.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  python-nova-2014.2.1-3.el7ost.noarch

  How to Reproduce
  ===
  # neutron port-create tenant1-net1 --binding:vnic-type direct
  # nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
  # nova suspend 
  # nova show 

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398999] Re: Block migrate with attached volumes copies volumes to themselves

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398999

Title:
  Block migrate with attached volumes copies volumes to themselves

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  New
Status in libvirt package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Triaged
Status in libvirt source package in Trusty:
  Confirmed
Status in nova source package in Trusty:
  Triaged
Status in libvirt source package in Utopic:
  Won't Fix
Status in nova source package in Utopic:
  Won't Fix
Status in libvirt source package in Vivid:
  Confirmed
Status in nova source package in Vivid:
  Triaged
Status in libvirt source package in Wily:
  Fix Released
Status in nova source package in Wily:
  Triaged

Bug description:
  When an instance with attached Cinder volumes is block migrated, the
  Cinder volumes are block migrated along with it. If they exist on
  shared storage, then they end up being copied, over the network, from
  themselves to themselves. At a minimum, this is horribly slow and de-
  sparses a sparse volume; at worst, this could cause massive data
  corruption.

  More details at
  http://lists.openstack.org/pipermail/openstack-dev/2014-June/038152.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383465] Re: [pci-passthrough] nova-compute fails to start

2015-11-14 Thread Alan Pevec
*** This bug is a duplicate of bug 1415768 ***
https://bugs.launchpad.net/bugs/1415768

** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383465

Title:
  [pci-passthrough] nova-compute fails to start

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Created a guest using nova with a passthrough device, shut down that
  guest, and disabled nova-compute (openstack-service stop). Went to turn
  things back on, and nova-compute fails to start.

  The trace:
  2014-10-20 16:06:45.734 48553 ERROR nova.openstack.common.threadgroup [-] PCI 
device request ({'requests': 
[InstancePCIRequest(alias_name='rook',count=2,is_new=False,request_id=None,spec=[{product_id='10fb',vendor_id='8086'}])],
 'code': 500}equests)s failed
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
125, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, 
in run_service
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 181, in start
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1152, in 
pre_start_hook
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5949, in 
update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
rt.update_available_resource(context)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 332, 
in update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return self._update_available_resource(context, resources)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 
272, in inner
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
return f(*args, **kwargs)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 349, 
in _update_available_resource
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup 
self._update_usage_from_instances(context, resources, instances)
  2014-10-20 16:06:45.734 48553 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 708, 
in _update_usage_from_instances
  2014-10-20 16:06:45.734 48553 TRACE 

[Yahoo-eng-team] [Bug 1399244] Re: rbd resize revert fails

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399244

Title:
  rbd resize revert fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  In Ceph CI, the revert-resize server test is failing.  It appears that
  revert_resize() does not take shared storage into account and deletes
  the original volume, which causes the start of the original instance to
  fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392316] Re: Hypervisors returns TemplateSyntaxError instead of error message

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1392316

Title:
  Hypervisors returns TemplateSyntaxError instead of error message

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New

Bug description:
  When trying to list hypervisors at /admin/hypervisors/,
  I got a TemplateSyntaxError. It happens when novaclient (nova-api)
  cannot fulfil the request.

  The exception in Horizon:

  Error while rendering table rows.
  Traceback (most recent call last):
File "/opt/stack/horizon/horizon/tables/base.py", line 1751, in get_rows
  for datum in self.filtered_data:
  TypeError: 'NoneType' object is not iterable
  Internal Server Error: /admin/hypervisors/
  Traceback (most recent call last):
...
File "/opt/stack/horizon/horizon/tables/base.py", line 1751, in get_rows
  for datum in self.filtered_data:
  TemplateSyntaxError: 'NoneType' object is not iterable

  
  IMO it should be more robust and just return an error message. That
  would be more consistent with how other views handle unavailable
  services.

  To reproduce the error it is enough for novaclient to raise an
  exception. In my case this happened when ZooKeeper was used as the
  servicegroup driver but nova-conductor had not yet prepared the
  required namespace (because of bug [1]), which meant nova-api returned
  an internal error:

  nova.api.openstack ServiceGroupUnavailable: The service from
  servicegroup driver ZooKeeperDriver is temporarily unavailable.

  The overall result is that the whole hypervisor list page was
  inaccessible only because it was not possible to list nova services.

  [1] https://bugs.launchpad.net/nova/+bug/1389782
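
  A sketch of the kind of defensive data loading a Horizon view can do
  (illustrative, not the merged patch); it uses horizon.exceptions.handle,
  which is how other admin views degrade gracefully when an API is down:

    from django.utils.translation import ugettext_lazy as _
    from horizon import exceptions
    from openstack_dashboard import api

    def hypervisors_or_empty(request):
        """Fetch hypervisors, degrading to an empty list on API errors."""
        try:
            return api.nova.hypervisor_list(request)
        except Exception:
            # Returning a list keeps the table rendering instead of
            # raising TemplateSyntaxError from a None iterable.
            exceptions.handle(request,
                              _('Unable to retrieve hypervisor information.'))
            return []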

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1392316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394051] Re: Can't display port list on a shared network in "Manage Floating IP Associations" page

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394051

Title:
  Can't display port list on a shared network in "Manage Floating IP
  Associations" page

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New

Bug description:
  
  I used the commands below to configure floating IPs. Juno on CentOS 7.

  neutron net-create public --shared  --router:external True
  --provider:network_type vlan --provider:physical_network physnet2
  --provider:segmentation_id 125

  neutron subnet-create public --name public-subnet \
    --allocation-pool start=125.2.249.170,end=125.2.249.248 \
    --disable-dhcp --gateway 125.2.249.1 --dns-nameserver 125.1.166.20 \
    125.2.249.0/24

  neutron net-create --shared OAM120 \
    --provider:network_type vlan --provider:physical_network physnet2 \
    --provider:segmentation_id 120

  neutron subnet-create --name oam120-subnet \
    --allocation-pool start=192.168.120.1,end=192.168.120.200 \
    --gateway 192.168.120.254 --dns-nameserver 10.1.1.1 \
    --dns-nameserver 125.1.166.20 OAM120 192.168.120.0/24

  neutron router-create my-router

  neutron router-interface-add my-router oam120-subnet

  neutron router-gateway-set my-router public

  
  I just checked the dashboard code; it seems there are some errors in
  the code below.

  /usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py
  def _get_reachable_subnets(self, ports):
      # Retrieve subnet list reachable from external network
      ext_net_ids = [ext_net.id for ext_net in self.list_pools()]
      gw_routers = [r.id for r in router_list(self.request)
                    if (r.external_gateway_info and
                        r.external_gateway_info.get('network_id')
                        in ext_net_ids)]
      reachable_subnets = set([p.fixed_ips[0]['subnet_id'] for p in ports
                               if ((p.device_owner ==
                                    'network:router_interface')
                                   and (p.device_id in gw_routers))])
      return reachable_subnets

  
  Why only list "device_owner = 'network:router_interface'", I guess it should 
list all "device_owner = 'compute:xxx'"

  Here is my workaround, as a diff (run from /usr/share/openstack-dashboard):

  [root@jn-controller openstack-dashboard]# diff ./openstack_dashboard/api/neutron.py.orig ./openstack_dashboard/api/neutron.py
  413,415c415
  <                                if ((p.device_owner ==
  <                                     'network:router_interface')
  <                                    and (p.device_id in gw_routers))])
  ---
  >                                if (p.device_owner.startswith('compute:'))])

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388764] Re: Horizon fail to load resources usage if ceilometer configured with SSL

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1388764

Title:
  Horizon fail to load resources usage if ceilometer configured with SSL

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New

Bug description:
  When ceilometer is configured with SSL and Horizon is configured as
  below in local_settings:

  OPENSTACK_SSL_CACERT=
  OPENSTACK_SSL_NO_VERIFY=false

  Horizon fails to load the meter list from ceilometer, while the
  ceilometer CLI retrieves the meter list with the same cert.

  Checking the Horizon code, I found that it uses the wrong argument to
  pass the cacert: it should be 'cacert' instead of 'ca_file'.

  https://github.com/openstack/python-ceilometerclient/blob/master/ceilometerclient/v2/client.py#L53

  In openstack_dashboard/api/ceilometer.py:
  @memoized
  def ceilometerclient(request):
  """Initialization of Ceilometer client."""

  endpoint = base.url_for(request, 'metering')
  insecure = getattr(settings, 'OPENSTACK_SSL_NO_VERIFY', False)
  cacert = getattr(settings, 'OPENSTACK_SSL_CACERT', None)
  return ceilometer_client.Client('2', endpoint,
  token=(lambda: request.user.token.id),
  insecure=insecure,
  ca_file=cacert)
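
  For comparison, the intended call with the corrected keyword (per the
  client signature linked above):

    return ceilometer_client.Client('2', endpoint,
                                    token=(lambda: request.user.token.id),
                                    insecure=insecure,
                                    cacert=cacert)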

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1388764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389985] Re: CLI will fail one time after restarting DB

2015-11-14 Thread Alan Pevec
*** This bug is a duplicate of bug 1374497 ***
https://bugs.launchpad.net/bugs/1374497

** Also affects: ceilometer/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389985

Title:
  CLI will fail one time after restarting DB

Status in Ceilometer:
  Fix Committed
Status in Ceilometer juno series:
  New
Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  Incomplete
Status in OpenStack Compute (nova):
  Incomplete
Status in oslo.db:
  New

Bug description:
  After restarting the database, the first command will fail. For
  example: restart the database, wait for a few minutes, then run "heat
  stack-list"; the result will look like below:

  ERROR: Remote error: DBConnectionError (OperationalError) 
ibm_db_dbi::OperationalError: SQLNumResultCols failed: [IBM][CLI Driver] 
SQL30081N  A communication error has been detected. Communication protocol 
being used: "TCP/IP".  Communication API being used: "SOCKETS".  Location where 
the error was detected: "10.11.1.14".  Communication function detecting the 
error: "send".  Protocol specific error code(s): "2", "*", "*".  SQLSTATE=08001 
SQLCODE=-30081 'SELECT stack.status_reason AS stack_status_reason, 
stack.created_at AS stack_created_at, stack.deleted_at AS stack_deleted_at, 
stack.action AS stack_action, stack.status AS stack_status, stack.id AS 
stack_id, stack.name AS stack_name, stack.raw_template_id AS 
stack_raw_template_id, stack.username AS stack_username, stack.tenant AS 
stack_tenant, stack.parameters AS stack_parameters, stack.user_creds_id AS 
stack_user_creds_id, stack.owner_id AS stack_owner_id, stack.timeout AS 
stack_timeout, stack.disable_rollback AS stack_disable_rol
 lback, stack.stack_user_project_id AS stack_stack_user_project_id, 
stack.backup AS stack_backup, stack.updated_at AS stack_updated_at \nFROM stack 
\nWHERE stack.deleted_at IS NULL AND stack.owner_id IS NULL AND stack.tenant = 
? ORDER BY stack.created_at DESC, stack.id DESC' 
('a3a14c6f82bd4ce88273822407a0829b',)
  [u'Traceback (most recent call last):\n', u'  File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', u'  File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', u' 
 File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', u'  File 
"/usr/lib/python2.6/site-packages/heat/engine/service.py", line 69, in 
wrapped\nreturn func(self, ctx, *args, **kwargs)\n', u'  File 
"/usr/lib/python2.6/site-packages/heat/engine/service.py", line 490, in 
list_stacks\nreturn [api.format_stack(stack) for stack in stacks]\n', u'  
File "/usr/lib/python2.6/site-packages/heat/engine/stack.py", line 264, in 
load_all\nshow_deleted, show_nested) or []\n', u'  File 
"/usr/lib/python2.6/site-packages/heat/db/api.py", li
 ne 130, in stack_get_all\nshow_deleted, show_nested)\n', u'  File 
"/usr/lib/python2.6/site-packages/heat/db/sqlalchemy/api.py", line 368, in 
stack_get_all\nmarker, sort_dir, filters).all()\n', u'  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 2241, in 
all\nreturn list(self)\n', u'  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 2353, in 
__iter__\nreturn self._execute_and_instances(context)\n', u'  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 2368, in 
_execute_and_instances\nresult = conn.execute(querycontext.statement, 
self._params)\n', u'  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 662, in 
execute\nparams)\n', u'  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 761, in 
_execute_clauseelement\ncompiled_sql, distilled_params\n', u'  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 874, in 
_execute_c
 ontext\ncontext)\n', u'  File 
"/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/compat/handle_error.py", 
line 125, in _handle_dbapi_exception\nsix.reraise(type(newraise), newraise, 
sys.exc_info()[2])\n', u'  File 
"/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/compat/handle_error.py", 
line 102, in _handle_dbapi_exception\nper_fn = fn(ctx)\n', u'  File 
"/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/exc_filters.py", line 323, 
in handler\ncontext.is_disconnect)\n', u'  File 
"/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/exc_filters.py", line 263, 
in _is_db_connection_error\nraise 
exception.DBConnectionError(operational_error)\n', u'DBConnectionError: 
(OperationalError) ibm_db_dbi::OperationalError: SQLNumResultCols failed: 
[IBM][CLI 

[Yahoo-eng-team] [Bug 1374473] Re: 500 error on router-gateway-set for DVR on second external network

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374473

Title:
  500 error on router-gateway-set for DVR on second external network

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  Under some circumstances this operation may fail.

  Steps to reproduce:

  1) Run Devstack with DVR *on* (devstack by default creates an external
     network and sets the gateway to the router)
  2) Create an external network
  3) Create a router
  4) Set the gateway to the router
  5) Observe the Internal Server Error

  Expected outcome: the gateway is correctly set.

  This occurs with the latest Juno code. The underlying error is an
  attempted double binding of the router to the L3 agent.

  More details in:

  http://paste.openstack.org/show/115614/
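
  A generic sketch of how a scheduler can make the binding idempotent
  instead of erroring on the second attempt (not the actual Neutron
  patch; the session and binding_model names are illustrative):

    from oslo_db import exception as db_exc

    def bind_router(session, router_id, agent_id, binding_model):
        """Insert a router/L3-agent binding, tolerating a concurrent insert."""
        try:
            with session.begin(subtransactions=True):
                session.add(binding_model(router_id=router_id,
                                          l3_agent_id=agent_id))
        except db_exc.DBDuplicateEntry:
            # Already bound (e.g. devstack bound it when it set the first
            # gateway); treat the operation as a no-op instead of a 500.
            pass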

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362676] Re: Hyper-V agent doesn't create stateful security group rules

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362676

Title:
  Hyper-V agent doesn't create stateful security group rules

Status in networking-hyperv:
  Fix Released
Status in neutron:
  New
Status in neutron juno series:
  New

Bug description:
  Hyper-V agent does not create stateful security group rules (ACLs),
  meaning it doesn't allow any response traffic to pass through.

  For example, the following security group rule:

  {"direction": "ingress", "remote_ip_prefix": null, "protocol": "tcp", "port_range_max": 22, "port_range_min": 22, "ethertype": "IPv4"}

  allows inbound TCP traffic on port 22, but since the Hyper-V agent does
  not add this rule as stateful, the reply traffic is never received
  unless an egress security group rule is specifically added as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1362676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-11-14 Thread Alan Pevec
** Also affects: glance/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Glance juno series:
  New
Status in heat:
  Fix Released
Status in heat kilo series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) juno series:
  Fix Committed
Status in OpenStack Identity (keystone) kilo series:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Won't Fix
Status in Sahara:
  Fix Committed

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  The following program illustrates choking of the nova-api service (but
  this problem is omnipresent in all other OpenStack API services using
  wsgi+eventlet).

  Note: I have explicitly set the wsgi_default_pool_size default value to
  10 in nova/wsgi.py in order to reproduce this problem.
  After you run the program below, you should try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # during this sleep time, check if the client socket connection
          # is released or not on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you
  configure keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the
  response is sent and read successfully by the client, you simply have
  to set keepalive to False when you create the wsgi server.
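
  A minimal sketch of what that looks like with eventlet directly
  (illustrative; the real services wrap this in their own WSGI server
  classes, and the port/app here are placeholders):

    import eventlet
    import eventlet.wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    sock = eventlet.listen(('0.0.0.0', 8774))
    # keepalive=False makes eventlet close the client socket after each
    # response, returning the green thread to the pool immediately.
    eventlet.wsgi.server(sock, app, keepalive=False)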

  Additional information: by default eventlet sends "Connection:
  keep-alive" if keepalive is set to True when a response is sent to the
  client, but it does not have the capability to set the timeout and max
  parameters, for example:
  Keep-Alive: timeout=10, max=5

  Note: after we disable keepalive in all the OpenStack API services
  using the wsgi library, it might impact existing applications built on
  the assumption that OpenStack API services use persistent connections.
  They might need to modify their applications if reconnection logic is
  not in place, and they might also see slower performance, as the HTTP
  connection has to be re-established for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1382064] Re: Failure to allocate tunnel id when creating networks concurrently

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382064

Title:
  Failure to allocate tunnel id when creating networks concurrently

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  When multiple networks are created concurrently, the following trace
  is observed:

  WARNING neutron.plugins.ml2.drivers.helpers 
[req-34103ce8-b6d0-459b-9707-a24e369cf9de None] Allocate gre segment from pool 
failed after 10 failed attempts
  DEBUG neutron.context [req-2995f877-e3e6-4b32-bdae-da6295e492a1 None] 
Arguments dropped when creating context: {u'project_name': None, u'tenant': 
None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
  DEBUG neutron.plugins.ml2.drivers.helpers 
[req-3541998d-44df-468f-b65b-36504e893dfb None] Allocate gre segment from pool, 
attempt 1 failed with segment {'gre_id': 300L} 
allocate_partially_specified_segment 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py:138
  DEBUG neutron.context [req-6dcfb91d-2c5b-4e4f-9d81-55ba381ad232 None] 
Arguments dropped when creating context: {u'project_name': None, u'tenant': 
None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
  ERROR neutron.api.v2.resource [req-34103ce8-b6d0-459b-9707-a24e369cf9de None] 
create failed
  TRACE neutron.api.v2.resource Traceback (most recent call last):
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  TRACE neutron.api.v2.resource result = method(request=request, **args)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 448, in create
  TRACE neutron.api.v2.resource obj = obj_creator(request.context, **kwargs)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 497, in 
create_network
  TRACE neutron.api.v2.resource tenant_id)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 160, 
in create_network_segments
  TRACE neutron.api.v2.resource segment = self.allocate_tenant_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 189, 
in allocate_tenant_segment
  TRACE neutron.api.v2.resource segment = 
driver.obj.allocate_tenant_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/type_tunnel.py", 
line 115, in allocate_tenant_segment
  TRACE neutron.api.v2.resource alloc = 
self.allocate_partially_specified_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py", line 
143, in allocate_partially_specified_segment
  TRACE neutron.api.v2.resource raise 
exc.NoNetworkFoundInMaximumAllowedAttempts()
  TRACE neutron.api.v2.resource NoNetworkFoundInMaximumAllowedAttempts: Unable 
to create the network. No available network found in maximum allowed attempts.
  TRACE neutron.api.v2.resource

  Additional conditions: multi-server deployment and MySQL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367189] Re: multipath not working with Storwize backend if CHAP enabled

2015-11-14 Thread Alan Pevec
** Also affects: cinder/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367189

Title:
  multipath not working with Storwize backend if CHAP enabled

Status in Cinder:
  Fix Released
Status in Cinder juno series:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in os-brick:
  Fix Released

Bug description:
  If I try to attach a volume to a VM while multipath is enabled in
  nova and CHAP is enabled in the Storwize backend, it fails:

  2014-09-09 11:37:14.038 22944 ERROR nova.virt.block_device 
[req-f271874a-9720-4779-96a8-01575641a939 a315717e20174b10a39db36b722325d6 
76d25b1928e7407392a69735a894c7fc] [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Driver failed to attach volume 
c460f8b7-0f1d-4657-bdf7-e142ad34a132 at /dev/vdb
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Traceback (most recent call last):
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 239, in 
attach
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] device_type=self['device_type'], 
encryption=encryption)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1235, in 
attach_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] disk_info)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1194, in 
volume_driver_method
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return method(connection_info, *args, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return f(*args, **kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 280, in 
connect_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=[0, 255])[0] \
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 579, in 
_run_iscsiadm_bare
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=check_exit_code)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 165, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return processutils.execute(*cmd, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 
193, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] cmd=' '.join(cmd))
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] ProcessExecutionError: Unexpected error 
while running command.
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m discovery -t sendtargets -p 
192.168.1.252:3260
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Exit code: 5
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stdout: ''
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stderr: 'iscsiadm: Connection to 
Discovery Address 192.168.1.252 closed\niscsiadm: Login I/O error, failed to 
receive a PDU\niscsiadm: retrying discovery login to 192.168.1.252\niscsiadm: 
Connection to Discovery Address 192.168.1.252 

[Yahoo-eng-team] [Bug 1378558] Re: Plugin panel not listed in configured panel group

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378558

Title:
  Plugin panel not listed in configured panel group

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  When adding panel Foo to the Admin dashboard's System panel group via
  the openstack_dashboard/local/enabled/ directory, with something like:

  PANEL = 'foo'
  PANEL_DASHBOARD = 'admin'
  PANEL_GROUP = 'admin'
  ADD_PANEL = 'openstack_dashboard.dashboards.admin.foo.panel.Foo'

  Foo appears under the panel group Other instead of System. This is the
  error in the Apache log:

  Could not process panel foo: 'tuple' object has no attribute 'append'
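
  The error suggests the panel group's list of panels is a tuple rather than
  a list; a small, hypothetical illustration of that failure mode (not
  Horizon's actual panel-group code):

  panels = ('hypervisors', 'aggregates')   # panel names stored in a tuple
  try:
      panels.append('foo')                 # fails: tuples are immutable
  except AttributeError as exc:
      print(exc)        # 'tuple' object has no attribute 'append'

  panels = list(panels)                    # converting to a list first
  panels.append('foo')                     # now the plugin panel can be added
  print(panels)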

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367189] Re: multipath not working with Storwize backend if CHAP enabled

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367189

Title:
  multipath not working with Storwize backend if CHAP enabled

Status in Cinder:
  Fix Released
Status in Cinder juno series:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in os-brick:
  Fix Released

Bug description:
  If I try to attach a volume to a VM while multipath is enabled in
  nova and CHAP is enabled in the Storwize backend, it fails:

  2014-09-09 11:37:14.038 22944 ERROR nova.virt.block_device 
[req-f271874a-9720-4779-96a8-01575641a939 a315717e20174b10a39db36b722325d6 
76d25b1928e7407392a69735a894c7fc] [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Driver failed to attach volume 
c460f8b7-0f1d-4657-bdf7-e142ad34a132 at /dev/vdb
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Traceback (most recent call last):
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 239, in 
attach
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] device_type=self['device_type'], 
encryption=encryption)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1235, in 
attach_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] disk_info)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1194, in 
volume_driver_method
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return method(connection_info, *args, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return f(*args, **kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 280, in 
connect_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=[0, 255])[0] \
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 579, in 
_run_iscsiadm_bare
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=check_exit_code)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 165, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return processutils.execute(*cmd, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 
193, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] cmd=' '.join(cmd))
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] ProcessExecutionError: Unexpected error 
while running command.
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m discovery -t sendtargets -p 
192.168.1.252:3260
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Exit code: 5
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stdout: ''
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stderr: 'iscsiadm: Connection to 
Discovery Address 192.168.1.252 closed\niscsiadm: Login I/O error, failed to 
receive a PDU\niscsiadm: retrying discovery login to 192.168.1.252\niscsiadm: 
Connection to Discovery Address 192.168.1.252 

[Yahoo-eng-team] [Bug 1313573] Re: nova backup fails to backup an instance with attached volume (libvirt, LVM backed)

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1313573

Title:
  nova backup fails to backup an instance with attached volume (libvirt,
  LVM backed)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Description of problem:
  An instance has an attached volume, after running the command:
  # nova backup   snapshot  
  An image has been created (type backup) and the status is stuck in 'queued'. 

  Version-Release number of selected component (if applicable):
  openstack-nova-compute-2013.2.3-6.el6ost.noarch
  openstack-nova-conductor-2013.2.3-6.el6ost.noarch
  openstack-nova-novncproxy-2013.2.3-6.el6ost.noarch
  openstack-nova-scheduler-2013.2.3-6.el6ost.noarch
  openstack-nova-api-2013.2.3-6.el6ost.noarch
  openstack-nova-cert-2013.2.3-6.el6ost.noarch

  python-glance-2013.2.3-2.el6ost.noarch
  python-glanceclient-0.12.0-2.el6ost.noarch
  openstack-glance-2013.2.3-2.el6ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. launch an instance from a volume.
  2. backup the instance.

  
  Actual results:
  The backup is stuck in queued state.

  Expected results:
  the backup should be available as an image in Glance.

  Additional info:
  The nova-compute error & the glance logs are attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1313573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296414] Re: quotas not updated when periodic tasks or startup finish deletes

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296414

Title:
  quotas not updated when periodic tasks or startup finish deletes

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  There are a couple of cases in the compute manager where we don't pass
  reservations to _delete_instance().  For example, one of them is
  cleaning up when we see a delete that is stuck in DELETING.

  The only place we ever update quotas as part of delete should be when
  the instance DB record is removed. If something is stuck in DELETING,
  it means that the quota was not updated.  We should make sure we're
  always updating the quota when the instance DB record is removed.

  Soft delete kinda throws a wrench in this, though, because I think you
  want soft deleted instances to not count against quotas -- yet their
  DB records will still exist. In this case, it seems we may have a race
  condition in _delete_instance() -> _complete_deletion() where if the
  instance somehow was SOFT_DELETED, quotas would have updated twice
  (once in soft_delete and once in _complete_deletion).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305897] Re: Hyper-V driver failing with dynamic memory due to virtual NUMA

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305897

Title:
  Hyper-V driver failing with dynamic memory due to virtual NUMA

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Starting with Windows Server 2012, Hyper-V provides the Virtual NUMA
  functionality. This option is enabled by default in the VMs depending
  on the underlying hardware.

  However, it's not compatible with dynamic memory. The Hyper-V driver
  is not aware of this constraint and it's not possible to boot new VMs
  if the nova.conf parameter 'dynamic_memory_ratio' > 1.

  The error in the logs looks like the following:
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops HyperVException: 
WMI job failed with status 10. Error details: Failed to modify device 'Memory'.
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops Dynamic memory and 
virtual NUMA cannot be enabled on the same virtual machine. - 
'instance-0001c90c' failed to modify device 'Memory'. (Virtual machine ID 
F4CB4E4D-CA06-4149-9FA3-CAD2E0C6CEDA)
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops Dynamic memory and 
virtual NUMA cannot be enabled on the virtual machine 'instance-0001c90c' 
because the features are mutually exclusive. (Virtual machine ID 
F4CB4E4D-CA06-4149-9FA3-CAD2E0C6CEDA) - Error code: 32773

  In order to solve this problem, it's required to change the field
  'VirtualNumaEnabled' in 'Msvm_VirtualSystemSettingData' (option
  available only in v2 namespace) while creating the VM when dynamic
  memory is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1305897/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Confirmed

Bug description:
  Example of this here:

  http://logs.openstack.org/33/97233/1/check/check-grenade-
  dsvm/f7b8a11/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-06-02_14_13_51_125

     File "/opt/stack/old/nova/nova/compute/manager.py", line 4153, in 
_detach_volume
   connection_info = jsonutils.loads(bdm.connection_info)
     File "/opt/stack/old/nova/nova/openstack/common/jsonutils.py", line 164, 
in loads
   return json.loads(s)
     File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
   return _default_decoder.decode(s)
     File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
   obj, end = self.raw_decode(s, idx=_w(s, 0).end())
   TypeError: expected string or buffer

  and this was in grenade with stable/icehouse nova commit 7431cb9

  There's nothing unusual about the test which triggers this - simply
  attaches a volume to an instance, waits for it to show up in the
  instance and then tries to detach it

  logstash query for this:

    message:"Exception during message handling" AND message:"expected
  string or buffer" AND message:"connection_info =
  jsonutils.loads(bdm.connection_info)" AND tags:"screen-n-cpu.txt"

  but it seems to be very rare
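
  A hedged sketch of a defensive guard against a None connection_info, using
  the stdlib json module in place of Nova's jsonutils for illustration (this
  is not necessarily the actual fix):

  import json

  def load_connection_info(raw):
      # Passing None straight to json.loads() raises "expected string or
      # buffer"; treat a missing value as an empty dict instead.
      return json.loads(raw) if raw else {}

  print(load_connection_info(None))                              # {}
  print(load_connection_info('{"driver_volume_type": "iscsi"}'))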

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293480] Re: Reboot host didn't restart instances due to libvirt lifecycle event change instance's power_stat as shutdown

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293480

Title:
  Reboot host  didn't restart instances due to  libvirt lifecycle event
  change instance's power_stat as shutdown

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  1. The libvirt driver receives libvirt lifecycle events (registered in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1004)
  and handles them in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L969
  That means shutting down a domain sends out a shutdown lifecycle event,
  and nova-compute will try to sync the instance's power_state.

  2. When the compute service is rebooted, it tries to restart the
  instances that were running before the reboot:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911
  The compute service only checks the power_state in the database, and that
  value can be changed by the sequence in 3. As a result, after a host
  reboot, some instances that were running before the reboot cannot be
  restarted.

  3. When the host is rebooted, the code path is roughly: 1) libvirt-guests
  shuts down all the domains, 2) libvirt sends out the shutdown lifecycle
  event, 3) nova-compute receives it, 4) saves power_state 'shutoff' in the
  DB, and 5) then tries to stop the instance. The compute service may be
  killed at any of these steps. In my test environment, with two running
  instances, only one was restarted successfully; the other had its
  power_state set to 'shutoff' and task_state to 'power off' in step 4), so
  it cannot pass the check in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911
  and will not be restarted.

  
  I am not sure whether this is a bug; I wonder if there is a solution for this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460741] Re: security groups iptables can block legitimate traffic as INVALID

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460741

Title:
  security groups iptables can block legitimate traffic as INVALID

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  The iptables implementation of security groups includes a default rule
  to drop any INVALID packets (according to the Linux connection state
  tracking system.)  It looks like this:

  -A neutron-openvswi-od0518220-e -m state --state INVALID -j DROP

  This is placed near the top of the rule stack, before any security
  group rules added by the user.  See:

  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L495
  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L506-L510

  However, there are some cases where you would not want traffic marked
  as INVALID to be dropped here.  Specifically, our use case:

  We have a load balancing scheme where requests from the LB are
  tunneled as IP-in-IP encapsulation between the LB and the VM.
  Response traffic is configured for DSR, so the responses go directly
  out the default gateway of the VM.

  The results of this are iptables on the hypervisor does not see the
  initial SYN from the LB to VM (because it is encapsulated in IP-in-
  IP), and thus it does not make it into the connection table.  The
  response that comes out of the VM (not encapsulated) hits iptables on
  the hypervisor and is dropped as invalid.

  I'd like to see a Neutron option to enable/disable the population of
  this INVALID state rule, so that operators (such as us) can disable it
  if desired.  Obviously it's better in general to keep it in there to
  drop invalid packets, but there are cases where you would like to not
  do this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457900] Re: dhcp_agents_per_network > 1 cause conflicts (NACKs) from dnsmasqs (break networks)

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457900

Title:
  dhcp_agents_per_network > 1 cause conflicts (NACKs) from dnsmasqs
  (break networks)

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  If neutron is configured to have more than one DHCP agent per network
  (option dhcp_agents_per_network=2), the dnsmasq instances reject leases
  handed out by the other dnsmasqs, creating a mess and preventing
  instances from booting normally.

  Symptoms:

  Cirros (at the log):
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK

  Steps to reproduce:
  1. Set up neutron with VLANs and dhcp_agents_per_network=2 option in 
neutron.conf
  2. Set up two or more different nodes with enabled neutron-dhcp-agent
  3. Create VLAN neutron network with --enable-dhcp option
  4. Create instance with that network

  Expected behaviour:

  Instance recieve IP address via DHCP without problems or delays.

  Actual behaviour:

  Instance stuck in the network boot for long time.
  There are complains about NACKs in the logs of dhcp client.
  There are multiple NACKs on tcpdump on interfaces

  Additional analysis: It is very complex, so I attach an example of two
  parallel tcpdumps from two DHCP namespaces in HTML format.

  
  Version: 2014.2.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456823] Re: address pair rules not matched in iptables counter-preservation code

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456823

Title:
  address pair rules not matched in iptables counter-preservation code

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  There are a couple of issues with the way our iptables rules are
  formed that prevent them from being matched in the code that looks at
  existing rules to preserve counters. So the counters end up getting
  wiped out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454434] Re: NoNetworkFoundInMaximumAllowedAttempts during concurrent network creation

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454434

Title:
  NoNetworkFoundInMaximumAllowedAttempts during concurrent network
  creation

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  NoNetworkFoundInMaximumAllowedAttempts can be thrown if networks are 
created by multiple threads simultaneously.
  This is related to https://bugs.launchpad.net/bugs/1382064
  Currently the DB logic works correctly; however, the 11 attempts the code 
makes right now might not be enough in some rare, unlucky cases under extreme 
concurrency.

  We need to randomize segmentation_id selection to avoid such issues.
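
  A minimal sketch of the randomization idea (illustration only, not the
  actual ML2 patch; the helper name and the in-memory list stand in for the
  DB query of unallocated segments):

  import random

  def pick_segment(free_segment_ids):
      # free_segment_ids: unallocated segmentation IDs fetched from the DB.
      if not free_segment_ids:
          raise RuntimeError("no available segment")
      # Choosing a random candidate spreads concurrent allocations across
      # the pool, so two transactions rarely race for the same ID.
      return random.choice(free_segment_ids)

  print(pick_segment(list(range(300, 400))))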

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460220] Re: ipset functional tests assume system capability

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460220

Title:
  ipset functional tests assume system capability

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  Production code uses ipset in the root namespace, but functional
  testing uses it in non-root namespaces. As it turns out, that
  functionality requires versions of the kernel and ipset not found in
  all versions of all distributions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456822] Re: AgentNotFoundByTypeHost exception logged when L3-agent starts up

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456822

Title:
  AgentNotFoundByTypeHost exception logged when L3-agent starts up

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  On my single-node devstack setup running the latest neutron code,
  there is one AgentNotFoundByTypeHost exception found for the L3-agent.
  However, the AgentNotFoundByTypeHost exception is not logged for the
  DHCP, OVS, or metadata agents.  This fact would point to a problem
  with how the L3-agent is starting up.

  Exception found in the L3-agent log:

  2015-05-19 11:27:57.490 23948 DEBUG oslo_messaging._drivers.amqpdriver [-] 
MSG_ID is 1d0f3e0a8a6744c9a9fc43eb3fdc5153 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311^M
  2015-05-19 11:27:57.550 23948 ERROR neutron.agent.l3.agent [-] Failed 
synchronizing routers due to RPC error^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 517, in 
fetch_and_sync_all_routers^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent routers = 
self.plugin_rpc.get_routers(context)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 91, in get_routers^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
router_ids=router_ids)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
156, in call^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
retry=self.retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
timeout=timeout, retry=retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 350, in send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent retry=retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 341, in _send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent raise result^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent RemoteError: 
Remote error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and 
host=DVR-Ctrl2 could not be found^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent [u'Traceback (most 
recent call last):\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply\nexecutor_callback))\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch\nexecutor_callback)\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  File 
"/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 81, in 
sync_routers\ncontext, host, router_ids))\n', u'  File 
"/opt/stack/neutron/neutron/db/l3_agentschedulers_db.py", line 290, in 
list_active_sync_routers_on_active_l3_agent\ncontext, 
constants.AGENT_TYPE_L3, host)\n', u'  File 
"/opt/stack/neutron/neutron/db/agents_db.py", line 197, in 
_get_agent_by_type_and_host\nhost=host)\n', u'AgentNotFoundByTypeHost: 
Agent with agent_ty
 pe=L3 agent and host=DVR-Ctrl2 could not be found\n'].^M

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459467] Re: port update multiple fixed IPs anticipating allocation fails with mac address error

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459467

Title:
  port update multiple fixed IPs anticipating allocation fails with mac
  address error

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  A port update with multiple fixed IP specifications, one giving only a
  subnet ID and one giving a fixed IP that conflicts with the address
  picked for the subnet-ID entry, results in a DB duplicate-entry error,
  which is presented to the user as a MAC address error.

  ~$ neutron port-update 7521786b-6c7f-4385-b5e1-fb9565552696 --fixed-ips 
type=dict 
{subnet_id=ca9dd2f0-cbaf-4997-9f59-dee9a39f6a7d,ip_address=42.42.42.42}
  Unable to complete operation for network 
0897a051-bf56-43c1-9083-3ac38ffef84e. The mac address None is in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449260] Re: [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449260

Title:
  [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  1) Start up Horizon
  2) Go to Images
  3) Next to an image, pick "Update Metadata"
  4) From the dropdown button, select "Update Metadata"
  5) In the Custom box, enter a value with some HTML like 
'alert(1)//', click +
  6) On the right-hand side, give it a value, like "ee"
  7) Click "Save"
  8) Pick "Update Metadata" for the image again, the page will fail to load, 
and the JavaScript console says:

  SyntaxError: invalid property id
  var existing_metadata = {"

  An alternative is if you change the URL to update_metadata for the
  image (for example,
  
http://192.168.122.239/admin/images/fa62ba27-e731-4ab9-8487-f31bac355b4c/update_metadata/),
  it will actually display the alert box and a bunch of junk.

  I'm not sure if update_metadata is actually a page, though... can't
  figure out how to get to it other than typing it in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453074] Re: [OSSA 2015-010] help_text parameter of fields is vulnerable to arbitrary html injection (CVE-2015-3219)

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453074

Title:
  [OSSA 2015-010] help_text parameter of fields is vulnerable to
  arbitrary html injection (CVE-2015-3219)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  The Field class help_text attribute is vulnerable to code injection if
  the text is somehow taken from user input.

  The Heat UI allows creating stacks from user input that defines
  parameters. Those parameters are then converted into input fields,
  which are vulnerable.

  The heat stack example exploit:

  description: Does not matter
  heat_template_version: '2013-05-23'
  outputs: {}
  parameters:
    param1:
  type: string
  label: normal_label
  description: hack=">alert('YOUR HORIZON IS PWNED')"
  resources: {}
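
  A hedged sketch of the mitigation direction, assuming Django's escape
  helper (this is an illustration, not Horizon's actual patch):

  from django.utils.html import escape

  # The description string from the template above, treated as untrusted.
  user_supplied = 'hack=">alert(\'YOUR HORIZON IS PWNED\')"'

  # Escaping before the value reaches the rendered help_text turns the
  # markup into inert text instead of executable HTML.
  safe_help_text = escape(user_supplied)
  print(safe_help_text)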

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453855] Re: HA routers may fail to send out GARPs when node boots

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453855

Title:
  HA routers may fail to send out GARPs when node boots

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  When a node boots, it starts the OVS and L3 agents. As an example, in
  RDO systemd unit files, these services have no dependency. This means
  that the L3 agent can start before the OVS agent. It can start
  configuring routers before the OVS agent has finished syncing with the
  server and started processing ovsdb monitor updates. The result is that
  when the L3 agent finishes configuring an HA router, it starts up
  keepalived, which under certain conditions will transition to master
  and send out gratuitous ARPs before the OVS agent finishes plugging
  its ports. This means that the gratuitous ARPs will be lost, and with
  the router acting as master, this can cause black holes.

  Possible solutions:
  * Introduce systemd dependencies, but this has its set of intricacies and 
it's hard to solve the above problem comprehensively just with this approach.
  * Regardless, it's a good idea to use the new keepalived flags (see the
  illustration after this list):
  garp_master_repeat: how many times the gratuitous ARP should be repeated
  after the transition to MASTER state
  garp_master_refresh: periodic delay in seconds between gratuitous ARPs
  sent while in MASTER state

  These will be configurable and have sane defaults.
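
  A hedged illustration of those two options inside a keepalived VRRP
  instance block; the instance name, interface, priority, address and values
  are made-up examples, not the defaults Neutron chose:

  vrrp_instance VR_1 {
      state BACKUP
      interface ha-1234abcd
      virtual_router_id 1
      priority 50
      # Resend gratuitous ARPs several times after becoming MASTER ...
      garp_master_repeat 5
      # ... and keep refreshing them periodically while MASTER, so a GARP
      # sent before the OVS port was plugged is not the only one ever sent.
      garp_master_refresh 10
      virtual_ipaddress {
          169.254.0.1/24 dev ha-1234abcd
      }
  }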

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464377] Re: Keystone v2.0 api accepts tokens deleted with v3 api

2015-11-14 Thread Alan Pevec
** Also affects: keystone/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1464377

Title:
  Keystone v2.0 api accepts tokens deleted with v3 api

Status in OpenStack Identity (keystone):
  Expired
Status in OpenStack Identity (keystone) juno series:
  New

Bug description:
  Keystone tokens that are deleted using the v3 api are still accepted by
  the v2 api. Steps to reproduce:

  1. Request a scoped token as a member of a tenant.
  2. Delete it using DELETE /v3/auth/tokens
  3. Request the tenants you can access with GET v2.0/tenants
  4. The token is accepted and keystone returns the list of tenants

  The token was a PKI token. Admin tokens appear to be deleted correctly.
  This could be a problem if a user's access needs to be revoked but they
  are still able to access v2 functions.
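
  For reference, step 2 above corresponds to a request like the following
  (endpoint and token values are placeholders):

  curl -X DELETE http://keystone.example.com:5000/v3/auth/tokens \
       -H "X-Auth-Token: $VALID_TOKEN" \
       -H "X-Subject-Token: $TOKEN_TO_REVOKE"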

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1464377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463363] Re: NSX-mh: Decimal RXTX factor not honoured

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463363

Title:
  NSX-mh: Decimal RXTX factor not honoured

Status in neutron:
  In Progress
Status in neutron juno series:
  New
Status in neutron kilo series:
  New
Status in vmware-nsx:
  Fix Committed

Bug description:
  A decimal RXTX factor, which is allowed by nova flavors, is not
  honoured by the NSX-mh plugin; it is simply truncated to an integer, as
  illustrated after the steps below.

  To reproduce:

  * Create a neutron queue
  * Create a neutron net / subnet using the queue
  * Create a new flavor which uses an RXTX factor other than an integer value
  * Boot a VM on the net above using the flavor
  * View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)
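
  A minimal sketch of the truncation versus the intended scaling (the
  variable names and bandwidth value are made up; this is not the plugin
  code):

  rxtx_factor = 1.2
  base_bandwidth_kbps = 10000

  # Truncating the factor first applies 1.2 as 1, leaving the queue unchanged.
  print(int(rxtx_factor) * base_bandwidth_kbps)           # 10000

  # Scaling first and rounding afterwards honours the decimal factor.
  print(int(round(base_bandwidth_kbps * rxtx_factor)))    # 12000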

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447883] Re: Restrict netmask of CIDR to avoid DHCP resync is not enough

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447883

Title:
  Restrict netmask of CIDR to avoid DHCP resync is not enough

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Restricting the netmask of the CIDR to avoid DHCP resync is not enough:
  https://bugs.launchpad.net/neutron/+bug/1443798

  I'd like to prevent following case:

  [Condition]
- Plugin: ML2
- subnet with "enable_dhcp" is True

  [Operations]
  A. Specify "[]" (an empty list) for "allocation_pools" when creating/updating a subnet
  ---
  $ curl -X POST -d '{"subnet": {"name": "test_subnet", "cidr": 
"192.168.200.0/24", "ip_version": 4, "network_id": 
"649c5531-338e-42b5-a2d1-4d49140deb02", "allocation_pools": []}}' -H 
"x-auth-token:$TOKEN" -H "content-type:application/json" 
http://127.0.0.1:9696/v2.0/subnets

  Then the dhcp-agent creates its own DHCP port, which reproduces the
  resync bug.

  B. Create ports and exhaust the allocation_pools
  ---
  1. Create a subnet with 192.168.1.0/24. The DHCP port has already been created.
     gateway_ip: 192.168.1.1
     DHCP port: 192.168.1.2
     allocation_pools: {"start": 192.168.1.2, "end": 192.168.1.254}
     the number of available ip_addresses is 252.

  2. Create non-DHCP ports and exhaust the ip_addresses in the allocation_pools.
     In this case, the user creates a port 252 times.
     the number of available ip_addresses is 0.

  3. The user deletes the DHCP port (192.168.1.2).
     the number of available ip_addresses is 1.

  4. The user creates a non-DHCP port.
     the number of available ip_addresses is 0.
     Then the dhcp-agent tries to create a DHCP port, which reproduces the resync bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460562] Re: ipset can't be destroyed when last sg rule is deleted

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460562

Title:
  ipset can't be destroyed when last sg rule is deleted

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  Reproduce steps:
  1. VM A is in the default security group
  2. the default security group has rules: 1. allow all traffic out; 2. allow 
itself as remote_group in
  3. first delete rule 1, then delete rule 2

  I found that the iptables rules on the compute node where VM A resides
  were not reloaded, and the relevant ipset was not destroyed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443186] Re: rebooted instances are shutdown by libvirt lifecycle event handling

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443186

Title:
  rebooted instances are shutdown by libvirt lifecycle event handling

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This is a continuation of bug 1293480 (which created bug 1433049).
  Those were reported against xen domains with the libvirt driver but we
  have a recreate with CONF.libvirt.virt_type=kvm, see the attached logs
  and reference the instance with uuid
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78.

  In this case, we're running a stress test of soft rebooting 30 active
  instances at once.  Because of a delay in the libvirt lifecycle event
  handling, they are all shut down after the reboot operation is complete
  and the instances go from ACTIVE to SHUTDOWN.

  This was reported to me against Icehouse code but the recreate is
  against Juno code with patch:

  https://review.openstack.org/#/c/169782/

  For better logging.

  Snippets from the log:

  2015-04-10 21:02:38.234 11195 AUDIT nova.compute.manager [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Rebooting instance

  2015-04-10 21:03:47.703 11195 DEBUG nova.compute.manager [req-
  8219e6cf-dce8-44e7-a5c1-bf1879e155b2 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Received event network-vif-
  unplugged-0b2c7633-a5bc-4150-86b2-c8ba58ffa785 external_instance_event
  /usr/lib/python2.6/site-packages/nova/compute/manager.py:6285

  2015-04-10 21:03:49.299 11195 INFO nova.virt.libvirt.driver [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance shutdown successfully.

  2015-04-10 21:03:53.251 11195 DEBUG nova.compute.manager [req-
  521a6bdb-172f-4c0c-9bef-855087d7dff0 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Received event network-vif-
  plugged-0b2c7633-a5bc-4150-86b2-c8ba58ffa785 external_instance_event
  /usr/lib/python2.6/site-packages/nova/compute/manager.py:6285

  2015-04-10 21:03:53.259 11195 INFO nova.virt.libvirt.driver [-]
  [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance running
  successfully.

  2015-04-10 21:03:53.261 11195 INFO nova.virt.libvirt.driver [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance soft rebooted
  successfully.

  **
  At this point we have successfully soft rebooted the instance
  **

  now we get a lifecycle event from libvirt that the instance is
  stopped; since we're no longer running a task, we assume the hypervisor
  is correct and we call the stop API

  2015-04-10 21:04:01.133 11195 DEBUG nova.virt.driver [-] Emitting event 
 
Stopped> emit_event /usr/lib/python2.6/site-packages/nova/virt/driver.py:1298
  2015-04-10 21:04:01.134 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] VM Stopped (Lifecycle Event)
  2015-04-10 21:04:01.245 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Synchronizing instance power state after 
lifecycle event "Stopped"; current vm_state: active, current task_state: None, 
current DB power_state: 1, VM power_state: 4
  2015-04-10 21:04:01.334 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] During _sync_instance_power_state the DB 
power_state (1) does not match the vm_power_state from the hypervisor (4). 
Updating power_state in the DB to match the hypervisor.
  2015-04-10 21:04:01.463 11195 WARNING nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance shutdown by itself. Calling the 
stop API. Current vm_state: active, current task_state: None, original DB 
power_state: 1, current VM power_state: 4

  **
  now we get a lifecycle event from libvirt that the instance is started, but 
since the instance already has a task_state of 'powering-off' because of the 
previous stop API call from _sync_instance_power_state, we ignore it.
  **

  
  2015-04-10 21:04:02.085 11195 DEBUG nova.virt.driver [-] Emitting event 
 
Started> emit_event /usr/lib/python2.6/site-packages/nova/virt/driver.py:1298
  2015-04-10 21:04:02.086 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] VM Started (Lifecycle Event)
  2015-04-10 21:04:02.190 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Synchronizing instance power state after 
lifecycle event "Started"; current vm_state: active, current task_state: 
powering-off, current DB power_state: 4, VM power_state: 1
  2015-04-10 21:04:02.414 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] During 

[Yahoo-eng-team] [Bug 1431404] Re: Don't trace when @reverts_task_state fails on InstanceNotFound

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431404

Title:
  Don't trace when @reverts_task_state fails on InstanceNotFound

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  This change https://review.openstack.org/#/c/163515/ added a warning
  when the @reverts_task_state decorator in the compute manager fails
  rather than just pass, because we were getting KeyErrors and never
  noticing them which broke the decorator.

  However, now we're tracing on InstanceNotFound which is a normal case
  if we're deleting the instance after a failure (tempest will delete
  the instance immediately after failures when tearing down a test):

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHJldmVydCB0YXNrIHN0YXRlIGZvciBpbnN0YW5jZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjYxNzA3MDE2OTV9

  http://logs.openstack.org/98/163798/1/check/check-tempest-dsvm-
  postgres-
  full/6eff665/logs/screen-n-cpu.txt.gz#_2015-03-12_13_11_36_304

  2015-03-12 13:11:36.304 WARNING nova.compute.manager 
[req-a5f3b37e-19e9-4e1d-9be7-bbb9a8e7f4c1 DeleteServersTestJSON-706956764 
DeleteServersTestJSON-535578435] [instance: 
6de2ad51-3155-4538-830d-f02de39b4be3] Failed to revert task state for instance. 
Error: Instance 6de2ad51-3155-4538-830d-f02de39b4be3 could not be found.
  Traceback (most recent call last):

File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner
  return func(*args, **kwargs)

File "/opt/stack/new/nova/nova/conductor/manager.py", line 134, in 
instance_update
  columns_to_join=['system_metadata'])

File "/opt/stack/new/nova/nova/db/api.py", line 774, in 
instance_update_and_get_original
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 143, in wrapper
  return f(*args, **kwargs)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2395, in 
instance_update_and_get_original
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 181, in wrapped
  return f(*args, **kwargs)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2434, in 
_instance_update
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 1670, in 
_instance_get_by_uuid
  raise exception.InstanceNotFound(instance_id=uuid)

  InstanceNotFound: Instance 6de2ad51-3155-4538-830d-f02de39b4be3 could
  not be found.
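
  A minimal sketch (not Nova's actual decorator) of the guard being asked
  for here: treat InstanceNotFound as an expected outcome and log it
  quietly, while still warning with a traceback for anything else. The
  helper and attribute names are illustrative only.

    import functools
    import logging

    LOG = logging.getLogger(__name__)


    class InstanceNotFound(Exception):
        """Stand-in for nova.exception.InstanceNotFound."""


    def reverts_task_state(fn):
        @functools.wraps(fn)
        def wrapper(self, context, *args, **kwargs):
            try:
                return fn(self, context, *args, **kwargs)
            except Exception:
                _revert_quietly(self, context, kwargs.get('instance'))
                raise
        return wrapper


    def _revert_quietly(manager, context, instance):
        try:
            if instance is not None:
                manager._instance_update(context, instance['uuid'],
                                         task_state=None)
        except InstanceNotFound:
            # Expected when the instance was deleted right after the
            # failure (e.g. tempest teardown); no traceback needed.
            LOG.debug('Instance already deleted; skipping task state '
                      'revert.')
        except Exception:
            LOG.warning('Failed to revert task state for instance.',
                        exc_info=True)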

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433049] Re: libvirt-xen: Instance status in nova may be different than real status

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433049

Title:
  libvirt-xen: Instance status in nova may be different than real status

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Tempest test
  ServerActionsTestJSON:test_resize_server_confirm_from_stopped and other
  similar from_stopped tests may fail with the libvirt-xen driver because
  the test times out waiting for the instance to reach SHUTOFF while nova
  keeps reporting the instance as ACTIVE.

  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
230, in test_resize_server_confirm_from_stopped
  self._test_resize_server_confirm(stop=True)
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
209, in _test_resize_server_confirm
  self.client.wait_for_server_status(self.server_id, expected_status)
File "/opt/stack/tempest/tempest/services/compute/json/servers_client.py", 
line 183, in wait_for_server_status
  ready_wait=ready_wait)
File "/opt/stack/tempest/tempest/common/waiters.py", line 93, in 
wait_for_server_status
  raise exceptions.TimeoutException(message)
  tempest.exceptions.TimeoutException: Request timed out
  Details: (ServerActionsTestJSON:test_resize_server_confirm_from_stopped) 
Server a0f07187-4e08-4664-ad48-a03cffb87873 failed to reach SHUTOFF status and 
task state "None" within the required time (196 s). Current status: ACTIVE. 
Current task state: None.

  
  From the nova log, I could see "VM Started (Lifecycle Event)" being
  reported while the instance is shut down and being resized.

  After tracking down this bug, the issue may come from Change-Id
  I690d3d700ab4d057554350da143ff77d78b509c6, "Delay STOPPED lifecycle
  event for Xen domains".
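
  A minimal, hypothetical sketch of the delayed-event idea referenced
  above (not the actual Nova change): hold STOPPED events for a short
  period and drop them if a STARTED event for the same instance arrives
  first, so a transient shutdown during resize does not reach the
  power-state sync. The delay value and callback shape are assumptions.

    import threading

    EVENT_DELAY = 15  # seconds; arbitrary value for illustration
    STOPPED = 'Stopped'
    STARTED = 'Started'


    class DelayedLifecycleEmitter(object):
        """Illustrative only: delay STOPPED events, cancel them if the
        domain comes back before the delay expires."""

        def __init__(self, emit_cb):
            self._emit = emit_cb      # callback taking (uuid, event)
            self._pending = {}        # instance uuid -> Timer

        def handle(self, uuid, event):
            if event == STOPPED:
                timer = threading.Timer(EVENT_DELAY, self._fire,
                                        (uuid, event))
                self._pending[uuid] = timer
                timer.start()
                return
            if event == STARTED and uuid in self._pending:
                # The domain reappeared (resize, migration, ...); the
                # earlier STOPPED was transient, so drop it and emit only
                # the STARTED event.
                self._pending.pop(uuid).cancel()
            self._emit(uuid, event)

        def _fire(self, uuid, event):
            self._pending.pop(uuid, None)
            self._emit(uuid, event)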

  A way to reproduce would be to run this script on a Xen machine using a small 
Cirros instance:
  nova boot --image 'cirros-0.3.2-x86_64-uec' --flavor 42 instance
  nova stop instance
  # wait sometime (around 20s) so we start with SHUTDOWN state
  nova start instance
  nova stop instance
  nova resize instance 84
  nova resize-confirm instance
  # check new state, should be shutoff.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1433049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439302] Re: "FixedIpNotFoundForAddress: Fixed ip not found for address None." traces in gate runs

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439302

Title:
  "FixedIpNotFoundForAddress: Fixed ip not found for address None."
  traces in gate runs

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Seeing this quite a bit in normal gate runs:

  http://logs.openstack.org/53/169753/2/check/check-tempest-dsvm-full-
  ceph/07dcae0/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-04-01_14_34_37_110

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRml4ZWRJcE5vdEZvdW5kRm9yQWRkcmVzczogRml4ZWQgaXAgbm90IGZvdW5kIGZvciBhZGRyZXNzIE5vbmUuXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIgQU5EIHRhZ3M6XCJtdWx0aWxpbmVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyNzkwMjQ0NTg4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] FixedIpNotFoundForAddress: Fixed ip not 
found for address None.
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] Traceback (most recent call last):
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] executor_callback))
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] executor_callback)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] result = func(ctxt, **new_args)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 186, in 
deallocate_for_instance
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] super(FloatingIP, 
self).deallocate_for_instance(context, **kwargs)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/manager.py", line 558, in 
deallocate_for_instance
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] instance=instance)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/manager.py", line 214, in deallocate_fixed_ip
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] context, address, 
expected_attrs=['network'])
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/objects/base.py", line 161, in wrapper
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] args, kwargs)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/conductor/rpcapi.py", line 329, in object_class_action
  

[Yahoo-eng-team] [Bug 1450682] Re: nova unit tests failing with pbr 0.11

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450682

Title:
  nova unit tests failing with pbr 0.11

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  test_version_string_with_package_is_good breaks with the release of
  pbr 0.11

  
nova.tests.unit.test_versions.VersionTestCase.test_version_string_with_package_is_good
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/test_versions.py", line 33, in 
test_version_string_with_package_is_good
  version.version_string_with_package())
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: '5.5.5.5-g9ec3421' != 
'2015.2.0-g9ec3421'

  
  
http://logs.openstack.org/27/169827/8/check/gate-nova-python27/2009c78/console.html
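
  A hedged sketch of how such a test can be made independent of the
  release number pbr reports: build the expected string from pbr itself
  instead of hardcoding '2015.2.0'. The "version-packagesuffix"
  composition is an assumption based on the mismatch shown above, not a
  statement about nova.version internals.

    import pbr.version

    version_info = pbr.version.VersionInfo('nova')


    def expected_version_string_with_package(package_suffix):
        # Derive the release part at runtime so a new pbr (or a new
        # release) cannot break the expectation.
        return '%s-%s' % (version_info.version_string(), package_suffix)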

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429093] Re: nova allows to boot images with virtual size > root_gb specified in flavor

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429093

Title:
  nova allows to boot images with virtual size > root_gb specified in
  flavor

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  It's currently possible to boot an instance from a QCOW2 image whose
  virtual size is larger than the root_gb size specified in the given
  flavor.

  Steps to reproduce:

  1. Download a QCOW2 image (e.g. Cirros -
  https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-i386-disk.img)

  2. Resize the image to a reasonable size:

  qemu-img resize cirros-0.3.0-i386-disk.img +9G

  3. Upload the image to Glance:

  glance image-create --file cirros-0.3.0-i386-disk.img --name cirros-
  10GB --is-public True --progress --container-format bare --disk-format
  qcow2

  4. Boot the first VM using a 'correct' flavor (root_gb > virtual size
  of the Cirros image), e.g. m1.small (root_gb = 20)

  nova boot --image cirros-10GB --flavor m1.small demo-ok

  5. Wait until the VM boots.

  6. Boot the second VM using an 'incorrect' flavor (root_gb < virtual
  size of the Cirros image), e.g. m1.tiny (root_gb = 1):

  nova boot --image cirros-10GB --flavor m1.tiny demo-should-fail

  7. Wait until the VM boots.

  Expected result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ERROR state (failed with FlavorDiskTooSmall)

  Actual result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ACTIVE state
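
  A minimal sketch of the kind of check that should produce
  FlavorDiskTooSmall in step 6: compare the image's virtual size, as
  reported by qemu-img, against the flavor's root_gb before spawning.
  This shells out to qemu-img directly and is illustrative, not Nova's
  actual code path.

    import json
    import subprocess


    class FlavorDiskTooSmall(Exception):
        pass


    def get_virtual_size(image_path):
        """Return the virtual size of a disk image in bytes."""
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', image_path])
        return json.loads(out.decode('utf-8'))['virtual-size']


    def check_root_disk_fits(image_path, root_gb):
        if root_gb and get_virtual_size(image_path) > root_gb * 1024 ** 3:
            raise FlavorDiskTooSmall()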

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420032] Re: remove_router_interface doesn't scale well with dvr routers

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420032

Title:
  remove_router_interface doesn't scale well with dvr routers

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  With DVR enabled, neutron remove-router-interface significantly
  degrades in response time as the number of l3_agents and the number of
  routers increases. A significant contributor to the poor performance
  is due to check_ports_exist_on_l3agent.  The call to
  get_subnet_ids_on_router returns an empty list since the port has
  already been deleted by this point.  The empty subnet list is then
  used as a filter to the subsequent call core_plugin.get_ports which
  unexpectedly returns all ports instead of an empty list of ports.
  Erroneously looping through the entire list of ports is the biggest
  contributor to the poor scalability.
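
  A hedged sketch of the guard described above: if the router no longer
  has any subnets, answer without querying instead of passing an empty
  filter, which the plugin treats as "no filter at all". Only
  get_subnet_ids_on_router and get_ports come from the report; the other
  attribute names are assumptions.

    def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
        subnet_ids = self.get_subnet_ids_on_router(context, router_id)
        if not subnet_ids:
            # An empty filter would match *all* ports rather than none,
            # so bail out before calling get_ports.
            return False
        filters = {'fixed_ips': {'subnet_id': subnet_ids}}
        ports = self._core_plugin.get_ports(context, filters=filters)
        return any(p.get('binding:host_id') == l3_agent['host']
                   for p in ports)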

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427343] Re: missing entry point for cisco apic topology agent

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427343

Title:
  missing entry point for cisco apic topology agent

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  Cisco APIC topology agent [0] is missing the entry point.

  
  [0] neutron.plugins.ml2.drivers.cisco.apic.apic_topology.ApicTopologyService
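
  For illustration, the fix amounts to declaring a console-script entry
  point for that module in neutron's setup.cfg; the script name and the
  callable below are assumptions, only the module path comes from [0].

    [entry_points]
    console_scripts =
        neutron-cisco-apic-service-agent = neutron.plugins.ml2.drivers.cisco.apic.apic_topology:service_main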

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430239] Re: Hyper-V: *DataRoot paths are not set for instances

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430239

Title:
  Hyper-V: *DataRoot paths are not set for instances

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  The Nova Hyper-V driver does not set the data root path locations for
  newly created instances to the same location as the instances. By
  default, Hyper-V places them on C:\. This can cause issues for small
  C:\ partitions, as some of these files can be large.

  The path locations that need to be set are: ConfigurationDataRoot,
  LogDataRoot, SnapshotDataRoot, SuspendDataRoot, SwapFileDataRoot.
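
  A minimal sketch of the intended behaviour, assuming a hypothetical
  'vm_settings' object exposing the VM's Msvm_VirtualSystemSettingData
  properties; only the five property names come from this report, and the
  modified settings would still have to be pushed back through the WMI
  management service.

    DATA_ROOT_PROPERTIES = ('ConfigurationDataRoot', 'LogDataRoot',
                            'SnapshotDataRoot', 'SuspendDataRoot',
                            'SwapFileDataRoot')


    def set_instance_data_roots(vm_settings, instance_path):
        """Point every Hyper-V data root at the instance's own directory
        instead of the default location on the system drive."""
        for prop in DATA_ROOT_PROPERTIES:
            setattr(vm_settings, prop, instance_path)
        return vm_settings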

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423772] Re: During live-migration Nova expects identical IQN from attached volume(s)

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423772

Title:
  During live-migration Nova expects identical IQN from attached
  volume(s)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When attempting a live-migration of an instance with one or more
  attached volumes, Nova expects the IQN to be exactly the same when it
  attaches the volume(s) to the new host. This conflicts with Cinder
  settings such as "hp3par_iscsi_ips", which allow multiple IPs for the
  purpose of load balancing.

  Example:
  An instance on Host A has a volume attached at 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  An attempt is made to migrate the instance to Host B.
  Cinder sends the request to attach the volume to the new host.
  Cinder gives the new host 
"/dev/disk/by-path/ip-10.10.120.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  Nova looks for the volume on the new host at the old location 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
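
  A hedged sketch of the corrective shape: instead of assuming the source
  path is valid on the destination, ask Cinder for connection info using
  the destination host's connector and use that when preparing the
  destination's disk configuration. volume_api.initialize_connection
  mirrors Nova's Cinder wrapper; everything else is illustrative.

    def refresh_connection_info(volume_api, context, bdms, dest_connector):
        """Return per-volume connection info valid on the destination."""
        refreshed = {}
        for bdm in bdms:
            # The destination may be given a different portal/IQN (e.g.
            # with hp3par_iscsi_ips), so never reuse the source's path.
            refreshed[bdm.volume_id] = volume_api.initialize_connection(
                context, bdm.volume_id, dest_connector)
        return refreshed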

  The following error appears in n-cpu in this case:

  2015-02-19 17:09:05.574 ERROR nova.virt.libvirt.driver [-] [instance: 
b6fa616f-4e78-42b1-a747-9d081a4701df] Live Migration failure: Failed to open 
file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
  listener.cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
212, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5426, in 
_live_migration
  recover_method(context, instance, dest, block_migration)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5393, in 
_live_migration
  CONF.libvirt.live_migration_bandwidth)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, 
in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, 
in proxy_call
  rv = execute(f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, 
in execute
  six.reraise(c, e, tb)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, 
in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in 
migrateToURI2
  if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
  libvirtError: Failed to open file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Removing descriptor: 3

  
  When looking at the nova DB, this is the state of block_device_mapping prior 
to the migration attempt:

  mysql> select * from block_device_mapping where 
instance_uuid='b6fa616f-4e78-42b1-a747-9d081a4701df' and deleted=0;
  
+-+-+++-+---+-+--+-+---+---+--+-+-+--+--+-+--++--+
  | created_at  | updated_at  | deleted_at | id | device_name | 
delete_on_termination | snapshot_id | volume_id| 
volume_size | no_device | connection_info   




 

[Yahoo-eng-team] [Bug 1439223] Re: misleading power state logging in _sync_instance_power_state

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439223

Title:
  misleading power state logging in _sync_instance_power_state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Commit aa1792eb4c1d10e9a192142ce7e20d37871d916a added more verbose
  logging of the various database and hypervisor states when
  _sync_instance_power_state is called (which can be called from
  handle_lifecycle_event - triggered by the libvirt driver, or from the
  _sync_power_states periodic task).

  The current instance power_state from the DB's POV and the power state
  from the hypervisor's POV (via handle_lifecycle_event) can be
  different and if they are different, the database is updated with the
  power_state from the hypervisor and the local db_power_state variable
  is updated to be the same as the vm_power_state (from the hypervisor).

  Then later, the db_power_state value is used to log the different
  states when we have conditions like the database says an instance is
  running / active but the hypervisor says it's stopped, so we call
  compute_api.stop().

  We should be logging the original database power state and the
  power_state from the hypervisor to more accurately debug when we're
  out of sync.
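
  A minimal sketch of the logging fix described above (illustrative, not
  the merged patch): keep the value read from the database before it is
  overwritten and use that in the log message. The constants stand in for
  nova's power_state / vm_states values.

    RUNNING, SHUTDOWN = 1, 4
    ACTIVE = 'active'


    def sync_power_state(db_instance, vm_power_state, stop_cb, log):
        orig_db_power_state = db_instance['power_state']
        if orig_db_power_state != vm_power_state:
            # DB is updated to match the hypervisor, as before.
            db_instance['power_state'] = vm_power_state
        if db_instance['vm_state'] == ACTIVE and vm_power_state == SHUTDOWN:
            # Log the *original* DB value, not the one just overwritten.
            log('Instance shutdown by itself. Calling the stop API. '
                'Original DB power_state: %s, VM power_state: %s'
                % (orig_db_power_state, vm_power_state))
            stop_cb()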

  This is already fixed on master:
  https://review.openstack.org/#/c/159263/

  I'm reporting the bug so that it can be backported to stable/juno.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450624] Re: Nova waits for events from neutron on resize-revert that aren't coming

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450624

Title:
  Nova waits for events from neutron on resize-revert that aren't coming

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  On resize-revert, the original host was waiting for plug events from
  neutron before restarting the instance. These aren't sent since we
  don't ever unplug the vifs. Thus, we'll always fail like this:

  
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 134, in _dispatch_and_reply
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 177, in _dispatch
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 123, in _do_dispatch
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 88, in wrapped
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher payload)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 71, in wrapped
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 298, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher pass
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 284, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 348, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 326, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 314, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1414065] Re: Nova can lose track of running VM if live migration raises an exception

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414065

Title:
  Nova can lose track of running VM if live migration raises an
  exception

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  There is a fairly serious bug in VM state handling during live
  migration, with the result that if libvirt raises an error *after* the
  VM has successfully live migrated to the target host, Nova can end up
  thinking the VM is shut off everywhere, despite it still being active.
  The consequences of this are quite dire, as the user can then manually
  start the VM again and corrupt any data in shared volumes and the
  like.

  The fun starts in the _live_migration method in
  nova.virt.libvirt.driver, if the 'migrateToURI2' method fails *after*
  the guest has completed migration.

  At start of migration, we see an event received by Nova for the new
  QEMU process starting on target host

  2015-01-23 15:39:57.743 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Started"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  Upon migration completion we see CPUs start running on the target host

  2015-01-23 15:40:02.794 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Resumed"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  And finally an event saying that the QEMU on the source host has
  stopped

  2015-01-23 15:40:03.629 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Stopped"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 4 from (pid=23081) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  It is the last event that causes the trouble.  It causes Nova to mark the VM 
as shutoff at this point.

  Normally the '_live_migrate' method would succeed and so Nova would
  then immediately & explicitly mark the guest as running on the target
  host. If an exception occurs though, this explicit update of VM
  state doesn't happen, so Nova considers the guest shutoff even though
  it is still running :-(

  
  The lifecycle events from libvirt have an associated "reason", so we could 
see that the shutoff event from libvirt corresponds to a migration being 
completed, and so not mark the VM as shutoff in Nova.  We would also have to 
make sure the target host processes the 'resume' event upon migrate completion.

  A safer approach, though, might be to just mark the VM as being in an
  ERROR state if any exception occurs during migration.
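
  A minimal sketch of that safer approach (not the merged fix): if the
  migration call raises after the guest may already be running on the
  target, surface the failure explicitly instead of trusting the stale
  "Stopped" event. The callback stands in for setting the instance's
  vm_state to ERROR and saving it.

    def live_migrate_or_error(domain, dest_uri, flags, bandwidth,
                              set_error_cb):
        """Illustrative wrapper around libvirt's migrateToURI2()."""
        try:
            domain.migrateToURI2(dest_uri, None, None, flags, None,
                                 bandwidth)
        except Exception:
            # The guest may still be active on one of the hosts; flag the
            # instance as ERROR rather than letting it be marked SHUTOFF.
            set_error_cb()
            raise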

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417745] Re: Cells connecting pool tracking

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417745

Title:
  Cells connecting pool tracking

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New

Bug description:
  Cells has an RPC driver for inter-cell communication.  An
  oslo.messaging.Transport is created for each inter-cell message.

  In previous versions of oslo.messaging, connection pool references
  were maintained within the RabbitMQ driver abstraction in
  oslo.messaging.  As of oslo.messaging commit
  f3370da11a867bae287d7f549a671811e8b399ef, the application must
  maintain a single reference to Transport or references to the
  connection pool will be lost.

  The net effect of this is that cells constructs a new broker
  connection pool  (and a connection) on every message sent between
  cells.  This is leaking references to connections.
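
  A hedged sketch of the fix's shape: keep a single oslo.messaging
  Transport per cell URL instead of building a new one for every message.
  oslo_messaging.get_transport is the real entry point; the cache wrapper
  itself is illustrative.

    import oslo_messaging

    _transports = {}


    def get_cell_transport(conf, transport_url):
        """Return a cached Transport for this cell, creating it once."""
        transport = _transports.get(transport_url)
        if transport is None:
            transport = oslo_messaging.get_transport(conf,
                                                     url=transport_url)
            _transports[transport_url] = transport
        return transport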

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516226] [NEW] Keystone V2 User API can access users outside of the default domain

2015-11-14 Thread Henry Nash
Public bug reported:

The Keystone V2 API is not meant to be able to "see" any users, groups or
projects outside of the default domain.  APIs that list these entities
are careful to filter out any that are in non-default domains.  However,
if you know your entity ID we don't prevent you from doing a direct lookup
- i.e. GET /users/<user_id> will work via the V2 API even if the user
is outside of the default domain.  The same is true for projects.
Since the V2 API does not have the concept of groups, there is no issue
in that case.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1516226

Title:
  Keystone V2 User API can access users outside of the default domain

Status in OpenStack Identity (keystone):
  New

Bug description:
  The Keystone V2 API is not meant to be able to "see" any users, groups
  or projects outside of the default domain.  APIs that list these
  entities are careful to filter out any that are in non-default
  domains.  However, if you know your entity ID we don't prevent you
  from doing a direct lookup - i.e. GET /users/<user_id> will work via
  the V2 API even if the user is outside of the default domain.  The
  same is true for projects.  Since the V2 API does not have the concept
  of groups, there is no issue in that case.
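
  A minimal sketch of the kind of guard the V2 controllers would need on
  direct lookups, assuming a default-domain constant and using a generic
  exception as a stand-in for keystone's NotFound errors; this is not
  keystone's actual code.

    DEFAULT_DOMAIN_ID = 'default'


    def assert_in_default_domain(ref):
        """Hide entities outside the default domain from V2 callers."""
        if ref.get('domain_id', DEFAULT_DOMAIN_ID) != DEFAULT_DOMAIN_ID:
            # Behave as if the entity does not exist for the V2 API,
            # matching what the list operations already do.
            raise LookupError('No such entity for the V2 API')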

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1516226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498163] Re: [OSSA 2015-020] Glance storage quota bypass when token is expired (CVE-2015-5286)

2015-11-14 Thread Alan Pevec
** Also affects: glance/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1498163

Title:
  [OSSA 2015-020] Glance storage quota bypass when token is expired
  (CVE-2015-5286)

Status in Glance:
  Fix Released
Status in Glance juno series:
  New
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  About a year ago there was a vulnerability called 'Glance user storage
  quota bypass': https://security.openstack.org/ossa/OSSA-2015-003.html,
  where any user could overcome the quota and clog up the storage.
  The fix was proposed in master and all other stable branches, but it
  turned out that it doesn't completely remove the issue and any user can
  still exceed the quota.

  It happens when the user token expires during a file upload and glance
  then tries to update the image status from 'saving' to 'active'.
  Glance gets an Unauthenticated exception from the registry server and
  fails with a 500 error. Meanwhile, a garbage file is left in the
  storage.

  Steps to reproduce mostly coincide with those from the previous bug, but
  in general:
  1. Set some value (like 1Gb) for user_storage_quota in glance-api.conf
  and restart the server.
  2. Make sure that your token will expire soon, so that you are able to
  create an image record in the DB and begin the upload, but the token
  expires during it.
  3. Create an image, begin the upload and quickly remove the image with
  'glance image-delete'.
  4. After the upload, check that the image is not in the list, i.e. it's
  deleted, but the file is still located in the store.
  5. Perform steps 2-4 several times to confirm that the user quota is
  exceeded.

  The related script (test_images.py from
  https://bugs.launchpad.net/glance/+bug/1398830) works fine too, but
  it's better to reduce the token lifetime in the keystone config to 1 or
  2 minutes, just so you don't have to wait for an hour.

  Glance api v2 is affected as well, but only if registry db_api is
  enabled.
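
  A hedged sketch of the mitigation's shape: if activating the image in
  the registry fails (for example because the token expired mid-upload),
  delete the data that was just written to the backend so the quota
  cannot be bypassed. registry_update and delete_from_backend are
  hypothetical stand-ins for glance's registry and store calls.

    def finish_upload(image_id, location, registry_update,
                      delete_from_backend):
        try:
            registry_update(image_id, status='active', location=location)
        except Exception:
            # An Unauthenticated error here would otherwise leave an
            # orphaned file in the store that counts against nobody.
            delete_from_backend(location)
            raise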

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1498163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501779] Re: Failing to delete an linux bridge causes log littering

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501779

Title:
  Failing to delete an linux bridge causes log littering

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  I saw this in some ansible jobs in the gate:

  2015-09-30 22:37:21.805 26634 ERROR
  neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
  [req-23466df3-f59e-4897-9a22-1abb7c99dfd9
  9a365636c1b44c41a9770a26ead28701 cbddab88045d45eeb3d2027a3e265b78 - -
  -] Cannot delete bridge brq33213e3f-2b, does not exist

  http://logs.openstack.org/57/227957/3/gate/gate-openstack-ansible-
  dsvm-commit/de3daa3/logs/aio1-neutron/neutron-linuxbridge-agent.log

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L533

  That should not be an ERROR message, it could be INFO at best.  If
  you're racing with RPC and a thing is already gone, which you were
  going to delete anyway, it's not an error.
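
  The shape of the suggested change, as an illustrative sketch (the
  bridge_exists callable stands in for the agent's own check):

    def delete_bridge(bridge_name, bridge_exists, log):
        """A missing bridge is an expected race, not an error."""
        if not bridge_exists(bridge_name):
            log.info("Cannot delete bridge %s; it does not exist",
                     bridge_name)
            return False
        # ... actual bridge teardown would follow here ...
        return True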

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501451] Re: Inconsistency in dhcp-agent when filling hosts and opts files

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501451

Title:
  Inconsistency in dhcp-agent when filling hosts and opts files

Status in neutron:
  Fix Committed
Status in neutron juno series:
  New

Bug description:
  We have a bunch of subnets created in the pre-Icehouse era that have
  ipv6_address_mode and ipv6_ra_mode unset.  For DHCPv6 functionality we
  rely on the enable_dhcp setting of a subnet.  However, in _iter_hosts a
  port is skipped iff ipv6_address_mode is set to SLAAC, but in
  _generate_opts_per_subnet a subnet is skipped when ipv6_address_mode is
  SLAAC or unset.

  Since we cannot update the ipv6_address_mode attribute on existing
  subnets (allow_put is False), this breaks DHCPv6 for these VMs.
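
  A minimal sketch of making the two code paths agree: one shared
  predicate used for both the hosts file and the opts file, so a subnet
  with enable_dhcp set and ipv6_address_mode unset is served in both
  places. The constant mirrors neutron's SLAAC value; the helper itself
  is illustrative.

    IPV6_SLAAC = 'slaac'


    def skip_subnet_for_dhcp(subnet):
        """Used by both _iter_hosts (per fixed IP) and
        _generate_opts_per_subnet in this sketch: skip only genuinely
        SLAAC subnets, never ones whose mode is merely unset."""
        return (subnet['ip_version'] == 6
                and subnet.get('ipv6_address_mode') == IPV6_SLAAC)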

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416933] Re: Race condition in Ha router updating port status

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416933

Title:
  Race condition in Ha router updating port status

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  When the L2 agent calls 'get_devices_details_list', the ports on this
  L2 agent will first be updated to BUILD, then 'update_device_up' will
  update them to ACTIVE; but for an HA router, which has two L3 agents,
  there will be a race condition.
  Steps to reproduce (it does not always happen, but it does most of the
  time):
  1.  'router-interface-add' to add a subnet to the HA router
  2.  'router-gateway-set' to set the router gateway
  The gateway port status will then sometimes stay BUILD forever.

  In 'get_device_details' the port status will be updated, but I think
  that if a port's status is ACTIVE and port['admin_state_up'] is True,
  this port should not be updated:

  def get_device_details(self, rpc_context, **kwargs):
      ..
      ..
      new_status = (q_const.PORT_STATUS_BUILD if port['admin_state_up']
                    else q_const.PORT_STATUS_DOWN)
      if port['status'] != new_status:
          plugin.update_port_status(rpc_context,
                                    port_id,
                                    new_status,
                                    host)
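
  A hedged sketch of that suggestion: leave a port that is already ACTIVE
  with admin_state_up untouched, so the second L3 agent of an HA router
  cannot knock it back to BUILD (illustrative, not the merged change).

    def compute_new_status(port):
        if port['status'] == 'ACTIVE' and port['admin_state_up']:
            return None   # caller should skip update_port_status()
        return 'BUILD' if port['admin_state_up'] else 'DOWN'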

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1422504] Re: floating ip delete deadlock

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1422504

Title:
  floating ip delete deadlock

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  New

Bug description:
  rdo juno:

  2015-02-16 13:54:11.772 3612 ERROR neutron.api.v2.resource 
[req-5c6e13d3-56d6-476b-a961-e767aea637e5 None] delete failed
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 476, in delete
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py", line 183, in 
delete_floatingip
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
self).delete_floatingip(context, id)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_db.py", line 1178, in 
delete_floatingip
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource router_id = 
self._delete_floatingip(context, id)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_db.py", line 840, in 
_delete_floatingip
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
l3_port_check=False)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 984, in 
delete_port
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource port_db, 
binding = db.get_locked_port_and_binding(session, id)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/db.py", line 141, in 
get_locked_port_and_binding
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
with_lockmode('update').
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2369, in one
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource ret = 
list(self)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2411, in 
__iter__
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
self.session._autoflush()
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1198, in 
_autoflush
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource self.flush()
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in 
flush
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
self._flush(objects)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in 
_flush
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
transaction.rollback(_capture_exception=True)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, 
in __exit__
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2001, in 
_flush
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
flush_context.execute()
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in 
execute
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
rec.execute(self)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 555, in 
execute
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource uow
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 117, 
in delete_obj
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
cached_connections, mapper, table, delete)
  2015-02-16 13:54:11.772 

[Yahoo-eng-team] [Bug 1405049] Re: Can't see the router in the network topology page, if neutron l3 agent HA is enabled.

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405049

Title:
  Can't see the router in the network topology page, if neutron l3 agent
  HA is enabled.

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New

Bug description:
  When I enable neutron L3 agent HA by setting the properties in
  neutron.conf and create a router from horizon, I can't see the
  router on the "Network Topology" page.

  But everything else works fine, for example adding a gateway or adding
  an interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398468] Re: Unable to terminate instance from Network Topology screen

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398468

Title:
  Unable to terminate instance from Network Topology screen

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New

Bug description:
  I get a server error in the JS console and the following traceback in
  the web server:

  Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
  self.result = application(self.environ, self.start_response)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py",
 line 67, in __call__
  return self.application(environ, start_response)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py",
 line 206, in __call__
  response = self.get_response(request)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 194, in get_response
  response = self.handle_uncaught_exception(request, resolver, 
sys.exc_info())
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 112, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/timur/develop/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/timur/develop/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
File "/home/timur/develop/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/timur/develop/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
  return handler(request, *args, **kwargs)
File "/home/timur/develop/horizon/horizon/tables/views.py", line 157, in get
  handled = self.construct_tables()
File "/home/timur/develop/horizon/horizon/tables/views.py", line 148, in 
construct_tables
  handled = self.handle_table(table)
File "/home/timur/develop/horizon/horizon/tables/views.py", line 120, in 
handle_table
  data = self._get_data_dict()
File "/home/timur/develop/horizon/horizon/tables/views.py", line 185, in 
_get_data_dict
  self._data = {self.table_class._meta.name: self.get_data()}
File 
"/home/timur/develop/horizon/openstack_dashboard/dashboards/project/instances/views.py",
 line 60, in get_data
  search_opts = self.get_filters({'marker': marker, 'paginate': True})
File 
"/home/timur/develop/horizon/openstack_dashboard/dashboards/project/instances/views.py",
 line 124, in get_filters
  filter_field = self.table.get_filter_field()
File "/home/timur/develop/horizon/horizon/tables/base.py", line 1239, in 
get_filter_field
  param_name = '%s_field' % filter_action.get_param_name()
  AttributeError: 'NoneType' object has no attribute 'get_param_name'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403068] Re: Tests fail with python 2.7.9

2015-11-14 Thread Alan Pevec
** Also affects: keystone/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403068

Title:
  Tests fail with python 2.7.9

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) icehouse series:
  Fix Released
Status in OpenStack Identity (keystone) juno series:
  New
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Tests that require SSL fail on python 2.7.9 due to the change in how
  python uses SSL certificates.
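
  A hedged sketch of the usual test-helper workaround on Python 2.7.9+:
  pass an explicit SSL context that skips certificate verification when
  opening the test server's self-signed endpoint.
  ssl._create_unverified_context() is the stdlib helper introduced
  alongside the 2.7.9 behaviour change.

    import ssl
    import urllib2


    def open_no_proxy_no_verify(url):
        """Open a URL without proxies and without verifying the test
        server's self-signed certificate (test helper sketch only)."""
        handler = urllib2.HTTPSHandler(
            context=ssl._create_unverified_context())
        opener = urllib2.build_opener(urllib2.ProxyHandler({}), handler)
        return opener.open(url)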

  
  ==
  FAIL: cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ipv6_and_ssl
  cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ipv6_and_ssl
  --
  _StringException: Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 237, in 
test_app_using_ipv6_and_ssl
  response = open_no_proxy('https://[::1]:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 237, in 
test_app_using_ipv6_and_ssl
  response = open_no_proxy('https://[::1]:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 

  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 237, in 
test_app_using_ipv6_and_ssl
  response = open_no_proxy('https://[::1]:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 

  
  ==
  FAIL: cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ssl
  cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ssl
  --
  _StringException: Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 212, in 
test_app_using_ssl
  response = open_no_proxy('https://127.0.0.1:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 
  Traceback (most recent call last):
  _StringException: Empty 

[Yahoo-eng-team] [Bug 1411383] Re: Arista ML2 plugin incorrectly syncs with EOS

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411383

Title:
  Arista ML2 plugin incorrectly syncs with EOS

Status in neutron:
  In Progress
Status in neutron juno series:
  New

Bug description:
  The Arista ML2 plugin periodically compares the data in the Neutron DB
  with EOS to ensure that they are in sync. If EOS reboots, then the
  data might be out of sync and the plugin needs to push data from
  Neutron DB to EOS. As an optimization, the plugin gets and stores the
  time at which the data on EOS was modified. Just before a sync, the
  plugin compares the stored time with the timestamp on EOS and performs
  the sync only if the timestamps differ.

  Due to a bug, the timestamp is stored incorrectly in the plugin, so
  the sync never takes place and the only way to force a sync is to
  restart the neutron server.
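
  A minimal, hypothetical sketch of the timestamp guard described above
  (class and method names are illustrative, not the plugin's actual API):

      class EosSyncService(object):
          """Illustration only: sync to EOS guarded by a stored timestamp."""

          def __init__(self, eos_client):
              self._eos = eos_client
              self._last_eos_update = None

          def synchronize(self, neutron_db):
              eos_time = self._eos.get_region_updated_time()
              if eos_time == self._last_eos_update:
                  return False               # timestamps match, skip the sync
              self._eos.push(neutron_db)     # full push from Neutron DB to EOS
              # The bug described above is in how this value gets stored: if
              # the stored value does not track what EOS reported, the guard
              # misbehaves and the sync can be skipped when it is needed.
              self._last_eos_update = eos_time
              return True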

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475411] Re: During post_live_migration the nova libvirt driver assumes that the destination connection info is the same as the source, which is not always true

2015-11-14 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475411

Title:
  During post_live_migration the nova libvirt driver assumes that the
  destination connection info is the same as the source, which is not
  always true

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The post_live_migration step of the Nova libvirt driver currently
  makes a bad assumption about the source and destination connector
  information. The destination connection info may differ from the
  source's, which ends up leaving LUNs dangling on the source because
  the BDM has overridden the connection info with that of the
  destination.

  Code section where this problem is occuring:

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).

  By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does
  not match:

  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/
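
  A rough sketch of the direction implied by the report: disconnect using
  connection info obtained for the source host's own connector rather than
  whatever is left in the BDM (the helper names below are placeholders, not
  nova's exact interfaces):

      def disconnect_on_source(volume_api, context, volume_id,
                               source_connector, disconnect_volume):
          """Disconnect a migrated volume using the source host's own view."""
          # The BDM connection_info may already describe the destination, so
          # ask Cinder for connection info that matches the source connector
          # and use that for the disconnect, so no LUNs are left dangling.
          source_info = volume_api.initialize_connection(
              context, volume_id, source_connector)
          disconnect_volume(source_info, source_connector)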

  Version of nova being used:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins 
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge "Port crypto to Python 3"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490581] Re: the items will never be deleted from metering_info

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490581

Title:
  the items will never be deleted from metering_info

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  The function _purge_metering_info of MeteringAgent class has a bug. The items 
of metering_info dictionary will never be deleted:
  if info['last_update'] > ts + report_interval:
  del self.metering_info[label_id]
  In this situation last_update will always be less than the current timestamp.
  Also this function is not covered by the unit tests.
  Also, the _purge_metering_info function uses the metering_info dict but
  it should use the metering_infos dict.
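
  A small, self-contained sketch of the corrected purge logic (the real
  agent keeps more state; this only illustrates the broken comparison in
  the snippet above):

      import time

      def purge_metering_infos(metering_infos, report_interval):
          """Drop labels that have not been updated within report_interval."""
          deadline = time.time() - report_interval
          expired = [label_id for label_id, info in metering_infos.items()
                     if info['last_update'] < deadline]
          for label_id in expired:
              del metering_infos[label_id]

      infos = {'label-1': {'last_update': time.time() - 600}}
      purge_metering_infos(infos, report_interval=300)
      print(infos)   # {} -- the stale entry has been removed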

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477253] Re: ovs arp_responder unsuccessfully inserts IPv6 address into arp table

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477253

Title:
  ovs arp_responder unsuccessfully inserts IPv6 address into arp table

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in neutron kilo series:
  Fix Released

Bug description:
  The ml2 openvswitch arp_responder agent attempts to install IPv6
  addresses into the OVS arp response tables. The action obviously
  fails, reporting:

  ovs-ofctl: -:4: 2001:db8::x:x:x:x invalid IP address

  The end result is that the OVS br-tun arp tables are incomplete.

  The submitted patch verifies that the address is IPv4 before
  attempting to add the address to the table.
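
  A minimal sketch of that guard using netaddr (which neutron already
  depends on); the function name is illustrative:

      import netaddr

      def should_install_arp_entry(ip_address):
          """Only IPv4 addresses belong in the OVS ARP responder table."""
          return netaddr.IPAddress(ip_address).version == 4

      print(should_install_arp_entry('10.0.0.5'))      # True
      print(should_install_arp_entry('2001:db8::1'))   # False -- skipped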

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473556] Re: Error log is generated when API operation is PolicyNotAuthorized and returns 404

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473556

Title:
  Error log is generated when API operation is PolicyNotAuthorized and
  returns 404

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  The neutron.policy module can raise webob.exc.HTTPNotFound when
  PolicyNotAuthorized is raised. In this case, neutron.api.resource
  logs the failure at ERROR level. It should be INFO level, as it is
  caused by user API requests.
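
  A hedged sketch of the kind of mapping this implies (the real fault
  handler in neutron.api.v2.resource is more involved; names here are
  illustrative):

      import logging
      import webob.exc

      LOG = logging.getLogger(__name__)

      # 4xx responses are triggered by user requests, so they are not
      # server errors and should not be logged at ERROR level.
      USER_FAULTS = (webob.exc.HTTPNotFound, webob.exc.HTTPForbidden,
                     webob.exc.HTTPConflict)

      def log_api_fault(exc):
          if isinstance(exc, USER_FAULTS):
              LOG.info('Request could not be completed: %s', exc)
          else:
              LOG.error('Unexpected error while handling request: %s', exc)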

  One of the easiest ways to reproduce this bug is as follows:

  (1) create a shared network by admin user
  (2) try to delete the shared network by regular user

  (A regular user can know a ID of the shared network, so the user can
  request to delete the shared network.)

  As a result we get the following log.
  It is confusing from the point of log monitoring.

  2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Enforcing rules: ['delete_network', 
'delete_network:provider:physical_network
  ', 'delete_network:shared', 'delete_network:provider:network_type', 
'delete_network:provider:segmentation_id'] from (pid=1439) log_rule_list 
/opt/stack/neutron/neutron/policy.py:319
  2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Failed policy check for 'delete_network' from 
(pid=1439) enforce /opt/stack/n
  eutron/neutron/policy.py:393
  2015-07-11 05:28:33.914 ERROR neutron.api.v2.resource 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] delete failed
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 495, in delete
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPNotFound(msg)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource HTTPNotFound: The 
resource could not be found.
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482371] Re: [OSSA 2015-019] Image status can be changed by passing header 'x-image-meta-status' with PUT operation using v1 (CVE-2015-5251)

2015-11-14 Thread Alan Pevec
** Also affects: glance/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1482371

Title:
  [OSSA 2015-019] Image status can be changed by passing header 'x
  -image-meta-status' with PUT operation using v1 (CVE-2015-5251)

Status in Glance:
  Fix Released
Status in Glance juno series:
  New
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Using Glance v1, one is able to change the status of an image to any
  one of the valid statuses by passing the header 'x-image-meta-status'
  with PUT on /images/.  This bug provides a way for an image
  to transition states that are otherwise not possible in an image's
  lifecycle.
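
  A minimal illustration of the call being described, using
  python-requests (endpoint, image id and token below are placeholders,
  not values from the report):

      import requests

      url = ('http://127.0.0.1:9292/v1/images/'
             '11111111-2222-3333-4444-555555555555')
      headers = {
          'X-Auth-Token': 'PLACEHOLDER-TOKEN',
          # On an unpatched v1 API this header alone changes the status.
          'x-image-meta-status': 'queued',
      }
      resp = requests.put(url, headers=headers)
      print(resp.status_code)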

  See http://paste.openstack.org/show/pNL7kvIZUz7cWJQwX64d/ for a
  reproduction of this behavior on devstack.

  As shown in the above paste, though one is able to change the status
  of an active image to queued, uploading data after re-setting the
  status to queued fails with a 400[1].  Though the purpose of [1]
  appears to be slightly different, it's fortunately saving us from
  badly breaking the immutability guarantees of glance images.

  [1]
  
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L760-L765

  NOTE: Marking this as a security vulnerability for now as users would
  be able to activate the deactivated images on their own. This probably
  affects deployments only where v1 is exposed publicly. However, it's
  probably worth discussing this from a security perspective as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1482371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481692] Re: Neutron usage_audit's router and floating IP reporting doesn't work with ML2 plugin

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481692

Title:
  Neutron usage_audit's router and floating IP reporting doesn't work
  with ML2 plugin

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  Neutron usage_audit's router and floating IP reporting doesn't work
  with the ML2 plugin, as router functionality has been moved to the L3
  plugin.

  The bug has been noted earlier
  http://lists.openstack.org/pipermail/openstack/2014-September/009371.html
  but I couldn't find a bug report from launchpad.

  The error in neutron-usage-audit.log looks like this
  2015-08-05 12:00:04.295 30126 CRITICAL neutron 
[req-74df5d30-7070-4152-86d3-cc4e2ef4fefa None] 'Ml2Plugin' object has no 
attribute 'get_routers'
  2015-08-05 12:00:04.295 30126 TRACE neutron Traceback (most recent call last):
  2015-08-05 12:00:04.295 30126 TRACE neutron   File 
"/usr/bin/neutron-usage-audit", line 10, in 
  2015-08-05 12:00:04.295 30126 TRACE neutron sys.exit(main())
  2015-08-05 12:00:04.295 30126 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/cmd/usage_audit.py", line 55, in main
  2015-08-05 12:00:04.295 30126 TRACE neutron for router in 
plugin.get_routers(cxt):
  2015-08-05 12:00:04.295 30126 TRACE neutron AttributeError: 'Ml2Plugin' 
object has no attribute 'get_routers'
  2015-08-05 12:00:04.295 30126 TRACE neutron 

  I found the bug on icehouse but the relevant code is the same in HEAD.
  My plan is to submit a patch to fix the bug; the fix is quite trivial.
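
  A rough sketch of the fix direction: fetch routers from the L3 service
  plugin instead of the core plugin (constant and manager names are as
  they existed around that release; treat them as assumptions):

      from neutron import manager
      from neutron.plugins.common import constants

      def get_routers(context):
          """Return routers via the L3 service plugin, not the core plugin."""
          l3_plugin = manager.NeutronManager.get_service_plugins().get(
              constants.L3_ROUTER_NAT)
          if not l3_plugin:
              return []
          return l3_plugin.get_routers(context)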

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489111] Re: [OSSA 2015-018] IP, MAC, and DHCP spoofing rules can by bypassed by changing device_owner (CVE-2015-5240)

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489111

Title:
  [OSSA 2015-018] IP, MAC, and DHCP spoofing rules can by bypassed by
  changing device_owner (CVE-2015-5240)

Status in neutron:
  Fix Released
Status in neutron juno series:
  New
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to the
  bug as attachments.

  --

  The anti-IP spoofing rules, anti-MAC spoofing rules, and anti-DHCP
  spoofing rules can be bypassed by changing the device_owner field of a
  compute node's port to something that starts with 'network:'.

  Steps to reproduce:

  Create a port on the target network:

  neutron port-create some_network

  Start a repeated update of the device_owner field to immediately
  change it back after nova sets it to 'compute:' on VM
  attachment. (This has to be done quickly because the owner has to be
  set to 'network:something' before the L2 agent wires up the security
  group rules.)

  watch neutron port-update  --device-owner
  network:hello

  Then boot the VM with the port UUID:

  nova boot test --nic port-id= --flavor m1.tiny
  --image cirros-0.3.4-x86_64-uec

  This VM will now have no iptables rules applied because it will be
  treated as a network owned port (e.g. router interface, DHCP
  interface, etc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329050] Re: Creating a PanelGroup produces group with name "Other"

2015-11-14 Thread Alan Pevec
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1329050

Title:
  Creating a PanelGroup produces group with name "Other"

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New

Bug description:
  Creating a PanelGroup that is made up of one or more panels that exist
  in a subdirectory does not yield a group with the correct name.
  Instead, it gives a Panel Group with a display name of "Other".

  
  Example code:
  In project/dashboard.py, I added a new PanelGroup

  class DataProcessingPanels(horizon.PanelGroup):
  name = _("Data Processing")
  slug = "data_processing"
  panels = ("data_processing.plugins",)

  and added it to the panels for the "Project" dashboard.

  The code for the "data_processing.plugins" panel is in
  .dashboards/project/data_processing/plugins

  This results in a Panel Group showing up in the UI with a display name
  of "Other".  The "plugins" panel is correctly listed in there and it
  seems to have full functionality.  The only bit that looks to be
  broken is the display name.

  
  As a part of my debugging effort, I changed the panel from 
"data_processing.plugins" to another panel that is not in a subdirectory and 
then the "Data Processing" group name displayed correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1329050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328546] Re: Race condition when hard rebooting instance

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328546

Title:
  Race condition when hard rebooting instance

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  Conditions for this to happen:
  ==

  1. Agent: neutron-linuxbridge-agent.
  2. Only 1 instance that belongs to this network is running on the hypervisor.
  3. Timing, it's a race condition after all ;-)

  Observed behavior:
  

  After a hard reboot the instance ends up in ERROR state and
  nova-compute logs an error saying:

  Cannot get interface MTU on 'brqf9d0e8cf-bd': No such device

  What happens:
  ===

  When nova does a hard reboot, the instance is first destroyed, which
  implies that the tap device is deleted from the linux bridge (leaving
  the bridge empty because of condition 2 above), and then re-created
  afterward. In between, neutron-linuxbridge-agent may clean up this
  empty bridge as part of its remove_empty_bridges()[1]; for this error
  to happen, neutron-linuxbridge-agent has to do that after
  plug_vifs()[2] and before domain.createWithFlags() finishes.

  [1]: 
https://github.com/openstack/neutron/blob/stable/icehouse/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py#L449.
  [2]: 
https://github.com/openstack/nova/blob/stable/icehouse/nova/virt/libvirt/driver.py#L3648-3656

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332917] Re: Deadlock when deleting from ipavailabilityranges

2015-11-14 Thread Alan Pevec
** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332917

Title:
  Deadlock when deleting from ipavailabilityranges

Status in neutron:
  Fix Released
Status in neutron juno series:
  New

Bug description:
  Traceback:
   TRACE neutron.api.v2.resource Traceback (most recent call last):
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
   TRACE neutron.api.v2.resource result = method(request=request, **args)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 477, in delete
   TRACE neutron.api.v2.resource obj_deleter(request.context, id, **kwargs)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 608, in 
delete_subnet
   TRACE neutron.api.v2.resource break
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 463, 
in __exit__
   TRACE neutron.api.v2.resource self.rollback()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
57, in __exit__
   TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 460, 
in __exit__
   TRACE neutron.api.v2.resource self.commit()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 370, 
in commit
   TRACE neutron.api.v2.resource self._prepare_impl()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 350, 
in _prepare_impl
   TRACE neutron.api.v2.resource self.session.flush()
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py", 
line 444, in _wrap
   TRACE neutron.api.v2.resource _raise_if_deadlock_error(e, 
self.bind.dialect.name)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py", 
line 427, in _raise_if_deadlock_error
   TRACE neutron.api.v2.resource raise 
exception.DBDeadlock(operational_error)
   TRACE neutron.api.v2.resource DBDeadlock: (OperationalError) (1213, 
'Deadlock found when trying to get lock; try restarting transaction') 'DELETE 
FROM ipavailabilityranges WHERE ipavailabilityranges.allocation_pool_id = %s 
AND ipavailabilityranges.first_ip = %s AND ipavailabilityranges.last_ip = %s' 
('b19b08b6-90f2-43d6-bfe1-9cbe6e0e1d93', '10.100.0.2', '10.100.0.14')

  http://logs.openstack.org/21/76021/12/check/check-tempest-dsvm-
  neutron-
  full/7577c27/logs/screen-q-svc.txt.gz?level=TRACE#_2014-06-21_18_39_47_122

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323658] Re: Nova resize/restart results in guest ending up in inconsistent state with Neutron

2015-11-14 Thread Dariusz Smigiel
Retested this with latest devstack configuration. Based on description
from other bug (duplicate of this:
https://bugs.launchpad.net/nova/+bug/1364588) I'm not able to reproduce
this problem.

Resized about 20 times on running and disabled servers. All the time, 
everything is OK. There are no problems with "ERROR" state or losing 
connectivity.
Logstash doesn't show any similar problems with this issue.

Closing as works-for-me, fix probably was already released.

If anyone has another experiences, please reopen with additional info.

** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: nova
 Assignee: (unassigned) => Dariusz Smigiel (smigiel-dariusz)

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323658

Title:
  Nova resize/restart results in guest ending up in inconsistent state
  with Neutron

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Was looking at this when investigating bug 1310852, so that might be a
  duplicate of this, but the "Public network connectivity check failed"
  message doesn't show up in the logs for that bug, so opening a new
  one.

  This is also maybe related to or regressions of bug 1194026 and/or bug
  1253896.

  The error in the console log:

  2014-05-27 13:34:49.707 | 2014-05-27 13:33:24,369 Creating ssh connection 
to '172.24.4.110' as 'cirros' with public key authentication
  2014-05-27 13:34:49.707 | 2014-05-27 13:33:24,491 Failed to establish 
authenticated ssh connection to cirros@172.24.4.110 ([Errno 111] Connection 
refused). Number attempts: 1. Retry after 2 seconds.
  2014-05-27 13:34:49.707 | 2014-05-27 13:33:27,162 Failed to establish 
authenticated ssh connection to cirros@172.24.4.110 ([Errno 111] Connection 
refused). Number attempts: 2. Retry after 3 seconds.
  2014-05-27 13:34:49.707 | 2014-05-27 13:33:32,049 starting thread (client 
mode): 0x9e9cf10L
  2014-05-27 13:34:49.707 | 2014-05-27 13:33:32,050 EOF in transport thread
  2014-05-27 13:34:49.707 | 2014-05-27 13:33:32,051 Public network 
connectivity check failed
  2014-05-27 13:34:49.707 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops Traceback (most recent call 
last):
  2014-05-27 13:34:49.707 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops   File 
"tempest/scenario/test_network_advanced_server_ops.py", line 119, in 
_check_public_network_connectivity
  2014-05-27 13:34:49.707 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops 
should_connect=should_connect)
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops   File 
"tempest/scenario/manager.py", line 779, in _check_vm_connectivity
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops 
linux_client.validate_authentication()
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops   File 
"tempest/common/utils/linux/remote_client.py", line 53, in 
validate_authentication
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops 
self.ssh_client.test_connection_auth()
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops   File 
"tempest/common/ssh.py", line 150, in test_connection_auth
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops connection = 
self._get_ssh_connection()
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops   File 
"tempest/common/ssh.py", line 75, in _get_ssh_connection
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops 
timeout=self.channel_timeout, pkey=self.pkey)
  2014-05-27 13:34:49.708 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops   File 
"/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 242, in 
connect
  2014-05-27 13:34:49.709 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops t.start_client()
  2014-05-27 13:34:49.709 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops   File 
"/usr/local/lib/python2.7/dist-packages/paramiko/transport.py", line 346, in 
start_client
  2014-05-27 13:34:49.709 | 2014-05-27 13:33:32.051 10354 TRACE 
tempest.scenario.test_network_advanced_server_ops raise e
  2014-05-27 13:34:49.709 | 

[Yahoo-eng-team] [Bug 1466547] Re: Hyper-V: Cannot add ICMPv6 security group rule

2015-11-14 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466547

Title:
  Hyper-V: Cannot add ICMPv6 security group rule

Status in networking-hyperv:
  Fix Committed
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Committed

Bug description:
  Security Group rules created with ethertype 'IPv6' and protocol 'icmp'
  cannot be added by the Hyper-V Security Groups Driver, as it cannot
  add rules with the protocol 'icmpv6'.

  This can be easily fixed by having the Hyper-V Security Groups Driver
  create rules with protocol '58' instead. [1] These rules will also
  have to be stateless, as ICMP rules cannot be stateful on Hyper-V.
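
  A tiny sketch of the protocol translation described above (function and
  constant names are illustrative, not the driver's actual code):

      ICMPV6_PROTOCOL_NUMBER = '58'   # IANA protocol number for ICMPv6

      def normalize_protocol(ethertype, protocol):
          """Map a security-group rule protocol to one Hyper-V accepts."""
          if ethertype == 'IPv6' and protocol in ('icmp', 'icmpv6'):
              return ICMPV6_PROTOCOL_NUMBER
          return protocol

      print(normalize_protocol('IPv6', 'icmp'))   # '58'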

  This bug is causing the test
  tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os
  to fail on Hyper-V.

  [1] http://www.iana.org/assignments/protocol-numbers/protocol-
  numbers.xhtml

  Log: http://paste.openstack.org/show/301866/

  Security Groups: http://paste.openstack.org/show/301870/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1466547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461406] Re: libvirt: missing iotune parse for LibvirtConfigGuestDisk

2015-11-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461406

Title:
  libvirt: missing  iotune parse for  LibvirtConfigGuestDisk

Status in OpenStack Compute (nova):
  Expired

Bug description:
  We support instance disk IO control with an <iotune> element in the
  guest disk XML (e.g. a throttle value of 102400).

  We set iotune in the LibvirtConfigGuestDisk class in libvirt/config.py.
  The parse_dom method doesn't parse the iotune options yet. This needs
  to be fixed.
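
  A self-contained sketch of the kind of parsing parse_dom is missing (the
  element names are libvirt's iotune fields; the function and its return
  shape are illustrative, not nova's actual attributes):

      from lxml import etree

      IOTUNE_FIELDS = ('total_bytes_sec', 'read_bytes_sec', 'write_bytes_sec',
                       'total_iops_sec', 'read_iops_sec', 'write_iops_sec')

      def parse_iotune(disk_xml):
          """Return the <iotune> throttle values found under a <disk>."""
          values = {}
          iotune = disk_xml.find('iotune')
          if iotune is None:
              return values
          for field in IOTUNE_FIELDS:
              node = iotune.find(field)
              if node is not None and node.text:
                  values[field] = int(node.text)
          return values

      disk = etree.fromstring('<disk><iotune>'
                              '<total_bytes_sec>102400</total_bytes_sec>'
                              '</iotune></disk>')
      print(parse_iotune(disk))   # {'total_bytes_sec': 102400}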

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492759] Re: heat-engine refers to a non-existent novaclient's method

2015-11-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492759

Title:
  heat-engine refers to a non-existent novaclient's method

Status in heat:
  Invalid
Status in OpenStack Compute (nova):
  Expired

Bug description:
  Openstack Kilo on Centos 7

  I cannot create a stack. heat-engine fails regardless of which template
  is used.
   
  Error message: ERROR: Property error: : resources.pgpool.properties.flavor: : 
'OpenStackComputeShell' object has no attribute '_discover_extensions

  heat-engine log:
  
  2015-09-06 15:34:08.242 19788 DEBUG oslo_messaging._drivers.amqp [-] unpacked 
context: {u'username': None, u'user_id': u'665b2e5b102a413c90433933aade392b', 
u'region_name': None, u'roles': [u'user', u'heat_stack_owner'], 
u'user_identity': u'- daddy', u'tenant_id': 
u'b408e8f5cb56432a96767c83583ea051', u'auth_token': u'***', u'auth_token_info': 
{u'token': {u'methods': [u'password'], u'roles': [{u'id': 
u'0698f895b3544a20ac511c6e287691d4', u'name': u'user'}, {u'id': 
u'2061bd7e4e9d4da4a3dc2afff69a823e', u'name': u'heat_stack_owner'}], 
u'expires_at': u'2015-09-06T14:34:08.136737Z', u'project': {u'domain': {u'id': 
u'default', u'name': u'Default'}, u'id': u'b408e8f5cb56432a96767c83583ea051', 
u'name': u'daddy'}, u'catalog': [{u'endpoints': [{u'url': 
u'http://172.17.1.1:9292', u'interface': u'admin', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'5dce804bafb34b159ec1b4385460a481'}, 
{u'url': u'http://172.17.1.1:9292', u'interface': u'public', u'region': 
u'CEURegion', u'region_id
 ': u'CEURegion', u'id': u'a5728528ead84649bd561f9841011ff4'}, {u'url': 
u'http://172.17.1.1:9292', u'interface': u'internal', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'e205b5ba78e0479fb391d90f4958a8a0'}], 
u'type': u'image', u'id': u'0a0dd8432bd64f88b2c1ffd3d5d23b78', u'name': 
u'glance'}, {u'endpoints': [{u'url': u'http://172.17.1.1:9696', u'interface': 
u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'15831ae42aa143cb94f0d3adc1b353fb'}, {u'url': u'http://172.17.1.1:9696', 
u'interface': u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'74bf11a2b9334256bf9abdc618556e2b'}, {u'url': 
u'http://172.17.1.1:9696', u'interface': u'internal', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'd326b2c9fa614cad8586c79ab76a66a0'}], 
u'type': u'network', u'id': u'0e75266a6c284a289edb11b1c627c53f', u'name': 
u'neutron'}, {u'endpoints': [{u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'int
 ernal', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'083e629299bb429ba6ad1bf03451e8db'}, {u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'3942023115194893bb6762d02e47524a'}, {u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'b6f4f8a8bc33444b862cd3d9360c67e2'}], u'type': u'compute', u'id': 
u'2a259406aeef4667873d06ef361a1c44', u'name': u'nova'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', 
u'interface': u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'919bab67f54b4973807dcefb37fc22aa'}, {u'url': 
u'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'internal', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'ce0963a3cfba44deb818f7d0551d8bdf'}, {u'url': u
 'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'e98842d6a18840f7a1d0595957eaa4d6'}], u'type': u'volume', u'id': 
u'5e3afcf192bb4ad8ad9bfd589b0641b9', u'name': u'cinder'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:8000/v1', u'interface': u'public', u'region': 
u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'4385c791314e4f8a926411b9f4707513'}, {u'url': u'http://172.17.1.1:8000/v1', 
u'interface': u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'a1ed10e71e3d4c81b4f3e175f4c29e3f'}, {u'url': 
u'http://172.17.1.1:8000/v1', u'interface': u'internal', u'region': 
u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'd6d2e7dc54fc4abbb99d93f95d795340'}], u'type': u'cloudformation', u'id': 
u'7a80a5d594414d6fb07f5332bca1d0e1', u'name': u'heat-cfn'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:5000/v2.0', u'interface': u'public', u'region': 
u'CEURegion', u'region_id': u'CEUR
 egion', u'id': u'0fef9f451d9b42bcaeea6addda1c3870'}, {u'url': 
u'http://172.17.1.1:35357/v2.0', u'interface': u'admin', u'region': 
u'CEURegion', 

[Yahoo-eng-team] [Bug 1359651] Re: xenapi: still get MAP_DUPLICATE_KEY in some edge cases

2015-11-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359651

Title:
  xenapi: still get MAP_DUPLICATE_KEY in some edge cases

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Older versions of XenServer require us to keep the live copy of
  xenstore in sync with the copy of xenstore recorded in the xenapi
  metadata for that VM.

  Code inspection has shown that we don't consistently keep those two
  copies up to date.

  While it's hard to reproduce these errors (add_ip_address_to_vm seems
  particularly likely to hit issues), it seems best to tidy up the
  xenstore writing code so we consistently add/remove keys from the live
  copy and the copy in xenapi.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359651/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385295] [NEW] use_syslog=True does not log to syslog via /dev/log anymore

2015-11-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

python-oslo.log SRU:
[Impact]

 * Nova services not able to write log to syslog

[Test Case]

 * 1. Set use_syslog to True in nova.conf/cinder.conf
   2. stop rsyslog service
   3. restart nova/cinder services
   4. restart rsyslog service
   5. Log is not written to syslog after rsyslog is brought up.

[Regression Potential]

 * none


Reproduced on:
https://github.com/openstack-dev/devstack 
514c82030cf04da742d16582a23cc64962fdbda1
/opt/stack/keystone/keystone.egg-info/PKG-INFO:Version: 2015.1.dev95.g20173b1
/opt/stack/heat/heat.egg-info/PKG-INFO:Version: 2015.1.dev213.g8354c98
/opt/stack/glance/glance.egg-info/PKG-INFO:Version: 2015.1.dev88.g6bedcea
/opt/stack/cinder/cinder.egg-info/PKG-INFO:Version: 2015.1.dev110.gc105259

How to reproduce:
Set
 use_syslog=True
 syslog_log_facility=LOG_SYSLOG
for Openstack config files and restart processes inside their screens

Expected:
Openstack logs logged to syslog as well

Actual:
Nothing goes to syslog

** Affects: oslo.log
 Importance: High
 Assignee: John Stanford (jxstanford)
 Status: Fix Released

** Affects: cinder (Ubuntu)
 Importance: Medium
 Status: Invalid

** Affects: nova
 Importance: Medium
 Status: Invalid

** Affects: python-oslo.log (Ubuntu)
 Importance: High
 Assignee: Liang Chen (cbjchen)
 Status: In Progress


** Tags: in-stable-kilo patch
-- 
use_syslog=True does not log to syslog via /dev/log anymore
https://bugs.launchpad.net/bugs/1385295
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385295] Re: use_syslog=True does not log to syslog via /dev/log anymore

2015-11-14 Thread Alan Pevec
** Package changed: nova (Ubuntu) => nova

** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
   Importance: Undecided => Medium

** Changed in: nova/juno
 Assignee: (unassigned) => Pádraig Brady (p-draigbrady)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385295

Title:
  use_syslog=True does not log to syslog via /dev/log anymore

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in oslo.log:
  Fix Released
Status in cinder package in Ubuntu:
  Invalid
Status in python-oslo.log package in Ubuntu:
  In Progress

Bug description:
  python-oslo.log SRU:
  [Impact]

   * Nova services not able to write log to syslog

  [Test Case]

   * 1. Set use_syslog to True in nova.conf/cinder.conf
 2. stop rsyslog service
 3. restart nova/cinder services
 4. restart rsyslog service
 5. Log is not written to syslog after rsyslog is brought up.

  [Regression Potential]

   * none

  
  Reproduced on:
  https://github.com/openstack-dev/devstack 
514c82030cf04da742d16582a23cc64962fdbda1
  /opt/stack/keystone/keystone.egg-info/PKG-INFO:Version: 2015.1.dev95.g20173b1
  /opt/stack/heat/heat.egg-info/PKG-INFO:Version: 2015.1.dev213.g8354c98
  /opt/stack/glance/glance.egg-info/PKG-INFO:Version: 2015.1.dev88.g6bedcea
  /opt/stack/cinder/cinder.egg-info/PKG-INFO:Version: 2015.1.dev110.gc105259

  How to reproduce:
  Set
   use_syslog=True
   syslog_log_facility=LOG_SYSLOG
  for Openstack config files and restart processes inside their screens

  Expected:
  Openstack logs logged to syslog as well

  Actual:
  Nothing goes to syslog

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485883] Re: NSX-mh: bad retry behaviour on controller connection issues

2015-11-14 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485883

Title:
  NSX-mh: bad retry behaviour on controller connection issues

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Committed
Status in vmware-nsx:
  Fix Committed

Bug description:
  If the connection to an NSX-mh controller fails - for instance because
  there is a network issue or the controller is unreachable - the
  neutron plugin keeps retrying the connection to the same controller
  until it times out, whereas the correct behaviour would be to try to
  connect to the other controllers in the cluster.

  The issue can be reproduced with the following steps:
  1. Three Controllers in the cluster 10.25.56.223,10.25.101.133,10.25.56.222
  2. Neutron net-create dummy-1 from openstack cli
  3. Vnc into controller-1, ifconfig eth0 down
  4. Do neutron net-create dummy-2 from openstack cli

  The API requests were forwarded to 10.25.56.223 originally. eth0
  interface was shutdown on 10.25.56.223. But the requests continued to
  get forwarded to the same Controllers and timed out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516260] [NEW] L3 agent sync_routers timeouts may cause cluster to fall down

2015-11-14 Thread Assaf Muller
Public bug reported:

L3 agent 'sync_routers' RPC call is sent when the agent starts or when
an exception occurs. It uses a default timeout of 60 seconds (An Oslo
messaging config option). At scale the server can take a long time to
answer, causing a timeout and the message is sent again, causing a
cascading failure and the situation does not resolve itself. The
sync_routers server RPC response was optimized to mitigate this, it
could also be helpful to simply increase the timeout.
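
One mitigation the report points at is raising the RPC timeout. Assuming
the usual knob, that is rpc_response_timeout in neutron.conf (60 is the
default):

    [DEFAULT]
    # Seconds to wait for a response from an RPC call.
    rpc_response_timeout = 180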

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1516260

Title:
  L3 agent sync_routers timeouts may cause cluster to fall down

Status in neutron:
  New

Bug description:
  L3 agent 'sync_routers' RPC call is sent when the agent starts or when
  an exception occurs. It uses a default timeout of 60 seconds (An Oslo
  messaging config option). At scale the server can take a long time to
  answer, causing a timeout and the message is sent again, causing a
  cascading failure and the situation does not resolve itself. The
  sync_routers server RPC response was optimized to mitigate this, it
  could also be helpful to simply increase the timeout.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1516260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423427] Re: tempest baremetal client is creating node with wrong property keys

2015-11-14 Thread Alan Pevec
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Fix Committed

** Changed in: nova/juno
   Importance: Undecided => High

** Changed in: nova/juno
 Assignee: (unassigned) => Adam Gandelman (gandelman-a)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423427

Title:
  tempest baremetal client is creating node with wrong property keys

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in tempest:
  Fix Released

Bug description:
  A new test has been added to tempest to stress the os-baremetal-nodes
  API extension.  The test periodically fails in the gate with traceback
  in n-api log:

  [req-01dcd35b-55f4-4688-ba18-7fe0c6defd52 
BaremetalNodesAdminTestJSON-1864409967 BaremetalNodesAdminTestJSON-1481542636] 
Caught error: 'cpus'
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
977, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
902, in _call_app
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 749, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack content_type, body, 
accept)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 814, in _process_stack
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 904, in dispatch
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/compute/contrib/baremetal_nodes.py", 
line 123, in index
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack 'cpus': 
inode.properties['cpus'],
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack KeyError: 'cpus'
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack

  This hits only periodically and only when another tempest baremetal
  test is running in parallel to the new test.  The other tests
  (tempest.api.baremetal.*) create some nodes in Ironic with node
  properties that are not the standard resource properties the
  nova->ironic proxy expects (from
  nova/api/openstack/compute/contrib/baremetal_nodes.py:201):

for inode in ironic_nodes:
  node = 
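
  A hedged sketch of the defensive lookup that avoids the KeyError when an
  Ironic node lacks the standard resource properties ('cpus' comes from the
  traceback above; the other keys and the helper itself are assumptions):

      def node_resources(inode):
          """Read standard resource properties, defaulting missing keys to 0."""
          props = inode.properties or {}
          return {
              'cpus': props.get('cpus', 0),
              'memory_mb': props.get('memory_mb', 0),
              'local_gb': props.get('local_gb', 0),
          }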

[Yahoo-eng-team] [Bug 1361211] Re: Hyper-V agent does not add new VLAN ids to the external port's trunked list on Hyper-V 2008 R2

2015-11-14 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
   Importance: Undecided => Medium

** Changed in: neutron/juno
 Assignee: (unassigned) => Claudiu Belu (cbelu)

** Tags removed: in-stable-juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361211

Title:
  Hyper-V agent does not add new VLAN ids to the external port's trunked
  list on Hyper-V 2008 R2

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Committed

Bug description:
  This issue affects Hyper-V 2008 R2 and does not affect Hyper-V 2012
  and above.

  The Hyper-V agent is correctly setting the VLAN ID and access mode
  settings on the vmswitch ports associated with a VM, but not on the
  trunked list associated with an external port. This is a required
  configuration.

  A workaround consists of setting the external port trunked list to
  contain all possible VLAN ids expected to be used in neutron's network
  configuration, as provided by the following script:

  https://github.com/cloudbase/devstack-hyperv-
  incubator/blob/master/trunked_vlans_workaround_2008r2.ps1

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1361211/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481346] Re: MH: router delete might return a 500 error

2015-11-14 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Released => Fix Committed

** Changed in: neutron/juno
Milestone: None => 2014.2.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481346

Title:
  MH: router delete might return a 500 error

Status in neutron:
  New
Status in neutron juno series:
  Fix Committed
Status in vmware-nsx:
  New

Bug description:
  If a logical router has been removed from the backend, and the DB is
  in an inconsistent state where no NSX mapping is stored for the neutron
  logical router, the backend will fail when attempting deletion of the
  router, causing the neutron operation to return a 500.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471050] Re: VLANs are not configured on VM migration

2015-11-14 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
 Assignee: (unassigned) => Shashank Hegde (hegde-shashank)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471050

Title:
  VLANs are not configured on VM migration

Status in networking-arista:
  New
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Committed

Bug description:
  Whenever a VM migrates from one compute node to the other, the VLAN is
  not provisioned on the new compute node. The correct behaviour should
  be to remove the VLAN on the interface on the old switch interface and
  provision the VLAN on the new switch interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1471050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374108] Re: Hyper-V agent cannot disconnect orphaned switch ports

2015-11-14 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
   Importance: Undecided => Low

** Changed in: neutron/juno
 Assignee: (unassigned) => Claudiu Belu (cbelu)

** Tags removed: in-stable-juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374108

Title:
  Hyper-V agent cannot disconnect orphaned switch ports

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Committed

Bug description:
  On Windows / Hyper-V Server 2008 R2, when a switch port has to be
  disconnected because the VM using it was removed, DisconnectSwitchPort
  will fail, returning an error code, and a HyperVException is raised.
  When the exception is raised, the switch port is not removed, which
  makes subsequent WMI operations more expensive.

  If the VM's VNIC has been removed, disconnecting the switch port is no
  longer necessary and the port should simply be removed.

  Trace:
  http://paste.openstack.org/show/115297/
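  A minimal sketch of the expected handling, assuming hypothetical WMI
  wrapper objects (the real agent method names and signatures may
  differ):

    import logging

    LOG = logging.getLogger(__name__)

    def disconnect_and_delete_port(vswitch_svc, switch_port, vnic_exists):
        ret_val = vswitch_svc.DisconnectSwitchPort(SwitchPort=switch_port)
        if ret_val and vnic_exists():
            # The VNIC is still present, so a failed disconnect is a
            # genuine error.
            raise RuntimeError("DisconnectSwitchPort failed: %s" % ret_val)
        if ret_val:
            # The VNIC was removed together with the VM: disconnecting is
            # no longer possible or necessary, so log and fall through to
            # the port removal instead of leaving a stale port behind.
            LOG.debug("Ignoring disconnect failure for an orphaned port")
        vswitch_svc.DeleteSwitchPort(SwitchPort=switch_port)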

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1374108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462973] Re: Network gateway flat connection fail because of None tenant_id

2015-11-14 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462973

Title:
  Network gateway flat connection fail because of None tenant_id

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Committed
Status in vmware-nsx:
  Fix Committed

Bug description:
  The NSX-mh backend does not accept "None" values for tags.
  Tags are applied to all NSX-mh ports; in particular, there is always a
  tag with the neutron tenant_id (q_tenant_id).

  In admin context, _get_tenant_id_for_create now returns the tenant_id
  of the resource being created, if there is one; otherwise it still
  returns context.tenant_id.
  The default L2 gateway unfortunately does not have a tenant_id, even
  though the tenant_id attribute is present in its data structure.
  This means that _get_tenant_id_for_create will return None, and NSX-mh
  will reject the request.
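  A minimal sketch of one way to avoid sending a None tag, assuming the
  {"scope": ..., "tag": ...} structure used for NSX tags (illustrative,
  not the actual plugin code):

    def build_nsx_tags(neutron_tenant_id, extra_tags=None):
        tags = list(extra_tags or [])
        # The default L2 gateway carries tenant_id=None, and NSX-mh
        # rejects None tag values, so only add the q_tenant_id tag when
        # a tenant is actually known.
        if neutron_tenant_id is not None:
            tags.append({"scope": "q_tenant_id", "tag": neutron_tenant_id})
        return tags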

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462974] Re: Network gateway vlan connection fails because of int conversion

2015-11-14 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/juno
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron/juno
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462974

Title:
  Network gateway vlan connection fails because of int conversion

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Committed
Status in vmware-nsx:
  Fix Committed

Bug description:
  So far there has been an implicit assumption that segmentation_id
  would be an integer. In fact, it is a string value, which was being
  passed down to NSX as-is.

  This means that passing a non-numeric value, like "xyz", would have
  triggered a backend error rather than a validation error.
  Moreover, the validity check for the VLAN tag is now in the form
  min < tag < max, and this does not work unless the tag is converted to
  an integer.
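  A minimal sketch of the kind of validation described above (the range
  bounds and error type are illustrative):

    MIN_VLAN_TAG = 1
    MAX_VLAN_TAG = 4094

    def validate_segmentation_id(value):
        # API input arrives as a string; convert before range-checking so
        # a value like "xyz" fails with a validation error instead of
        # being passed through to the backend.
        try:
            tag = int(value)
        except (TypeError, ValueError):
            raise ValueError("segmentation_id must be an integer, got %r"
                             % value)
        if not MIN_VLAN_TAG <= tag <= MAX_VLAN_TAG:
            raise ValueError("segmentation_id %d is outside %d-%d"
                             % (tag, MIN_VLAN_TAG, MAX_VLAN_TAG))
        return tag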

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394900] Re: cinder disabled, many popups about missing volume service

2015-11-14 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Released => Fix Committed

** Changed in: horizon/juno
Milestone: 2014.2.2 => 2014.2.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394900

Title:
  cinder disabled, many popups about missing volume service

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  In an environment where cinder is disabled, I'm getting many error popups:
  "Error: Invalid service catalog service: volume"

  keystone catalog | grep Service
  Service: compute
  Service: network
  Service: computev3
  Service: image
  Service: metering
  Service: ec2
  Service: orchestration
  Service: identity

  This is seen in a juno environment.
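  A minimal sketch of guarding volume calls on the service catalog,
  assuming a plain list of catalog entries (Horizon's real helpers work
  on the request object instead):

    def volume_service_enabled(service_catalog):
        # Only treat the volume service as available when a block
        # storage entry is actually present in the catalog.
        return any(entry.get("type") in ("volume", "volumev2")
                   for entry in service_catalog)

    def list_volumes(service_catalog, cinder_client):
        # Skip the cinder call entirely when the service is not deployed,
        # instead of raising "Invalid service catalog service: volume".
        if not volume_service_enabled(service_catalog):
            return []
        return cinder_client.volumes.list()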

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414218] Re: Remove extraneous trace in linux/dhcp.py

2015-11-14 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Released => Fix Committed

** Changed in: neutron/juno
Milestone: 2014.2.3 => 2014.2.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414218

Title:
  Remove extraneous trace in linux/dhcp.py

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  The debug tracepoint in Dnsmasq._output_hosts_file is extraneous and
  causes unnecessary performance overhead due to string formatting when
  creating a large number (> 1000) of ports at one time.

  The trace point is unnecessary since the data is being written to disk
  and the file can be examined in a worst case scenario. The added
  performance overhead is an order of magnitude in difference (~.5
  seconds versus ~.05 seconds at 1500 ports).
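  A minimal sketch of the pattern behind the fix (not the actual Dnsmasq
  code): the data is written to disk anyway, so the per-port trace can be
  dropped, and any trace that remains should use lazy log arguments.

    import logging

    LOG = logging.getLogger(__name__)

    def output_hosts_file(hosts_path, host_entries):
        buf = "\n".join(host_entries)
        with open(hosts_path, "w") as f:
            f.write(buf)
        # Lazy %-style arguments: the message is only formatted when
        # debug logging is enabled, and the large buffer itself is never
        # logged.
        LOG.debug("Wrote %d host entries to %s", len(host_entries),
                  hosts_path)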

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381419] Re: glance.tests.unit.v2.test_images_resource.TestImagesController.test_index_with_marker failed in periodic stable job run

2015-11-14 Thread Alan Pevec
** Also affects: glance/juno
   Importance: Undecided
   Status: New

** No longer affects: glance/juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1381419

Title:
  glance.tests.unit.v2.test_images_resource.TestImagesController.test_index_with_marker
  failed in periodic stable job run

Status in Glance:
  Confirmed

Bug description:
  The traceback is as follows:

  ft1.1942: glance.tests.unit.v2.test_images_resource.TestImagesController.test_index_with_marker_StringException:
  Traceback (most recent call last):
    File "glance/tests/unit/v2/test_images_resource.py", line 378, in test_index_with_marker
      self.assertTrue(UUID2 in actual)
    File "/usr/lib64/python2.6/unittest.py", line 324, in failUnless
      if not expr: raise self.failureException, msg
  AssertionError

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1381419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

