[Yahoo-eng-team] [Bug 1405294] Re: Live migration with attached volume performs breaking rollback on failure

2016-04-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405294

Title:
  Live migration with attached volume performs breaking rollback on
  failure

Status in OpenStack Compute (nova):
  Expired

Bug description:
  During live migration with an attached volume, nova ignores
  initialize_connection errors and does not roll back safely.

  Steps:
  * Create a nova instance
  * Attach a cinder volume
  * Perform ‘nova live-migration’ to a different backend
  * Cause a failure in the ‘initialize_connection’ call to the new host
  * Wait for nova to call ‘terminate_connection’ on the connection to the
    original host

  Result:
  * Instance remains on the original host with the Cinder volume attached
    according to Cinder but no longer mapped on the backend. This removes
    connectivity from storage to the host and can cause data loss.

  
  Triage:
  What seems to be happening is that Nova does not stop the migration when
  it receives an error from Cinder and ends up calling terminate_connection
  for the source host when it should not.
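
  A minimal sketch of the guard this triage suggests (the function names and
  the volume_api wrapper below are illustrative, not the actual nova code
  path): abort on a destination initialize_connection failure and only
  terminate what was actually set up.

  # Illustrative sketch only -- not the actual nova code. Assumes
  # volume_api wraps Cinder's initialize_connection/terminate_connection.
  def pre_live_migration(volume_api, context, volume_id, dest_connector):
      try:
          # Ask Cinder to export the volume to the destination host.
          return volume_api.initialize_connection(
              context, volume_id, dest_connector)
      except Exception:
          # Abort the migration here; do NOT fall through to a cleanup
          # path that calls terminate_connection() for the source host,
          # since the instance is still using that connection.
          raise

  def rollback_live_migration(volume_api, context, volume_id,
                              dest_connector, dest_initialized):
      # Only tear down what was actually set up on the destination.
      if dest_initialized:
          volume_api.terminate_connection(context, volume_id, dest_connector)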

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1405294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541301] Re: random allocation of floating ip from different external subnet

2016-04-23 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541301

Title:
  random allocation of floating ip from different external subnet

Status in neutron:
  Expired

Bug description:
  I have created three different subnets (subnet1, subnet2, subnet3) in an
  "EXTERNAL" network.

  When associating a floating IP with an instance, neutron randomly picks
  IPs from all the subnets.

  But I need to specify a particular EXTERNAL subnet when associating.

  As of now only the external network name can be selected; the subnet
  cannot be chosen by the user when associating a floating IP with an
  instance.

  We are stuck allocating the required floating IP needed to execute an
  important use case.
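
  For reference, newer Neutron releases accept a subnet_id when creating a
  floating IP; a rough python-neutronclient sketch follows (the UUIDs and
  credentials are placeholders, and the field is only honoured where the API
  supports it):

  from neutronclient.v2_0 import client as neutron_client

  # Placeholders; assumes a Neutron API that accepts 'subnet_id' on
  # floating IP creation (newer releases).
  neutron = neutron_client.Client(username='admin', password='secret',
                                  tenant_name='admin',
                                  auth_url='http://controller:5000/v2.0')
  body = {'floatingip': {
      'floating_network_id': 'EXTERNAL-NET-UUID',
      # Pick the address from a specific external subnet instead of
      # letting Neutron choose one at random.
      'subnet_id': 'SUBNET1-UUID',
  }}
  fip = neutron.create_floatingip(body)
  print(fip['floatingip']['floating_ip_address'])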

  Kindly resolve.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541301/+subscriptions



[Yahoo-eng-team] [Bug 1548278] Re: rabbitmq q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b is blocked

2016-04-23 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548278

Title:
  rabbitmq q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b is blocked

Status in neutron:
  Expired

Bug description:
  Queue q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b
  is blocked (please refer to the attached picture).

  Version: RabbitMQ 3.6.0 release, OpenStack Kilo

  This phenomenon sometimes comes up in a large-scale environment: when a
  RabbitMQ message queue is created and no consumer is bound to it, but
  producers keep publishing messages to it continuously, the queue is never
  dropped.
  How can I get a queue that has no consumers or producers bound to it to be
  dropped?
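
  One real RabbitMQ option is the per-queue 'x-expires' argument (or an
  equivalent 'expires' policy applied on the broker), which drops a queue
  after it has had no consumers for the given time. A small pika sketch of
  the argument (the host and queue name are placeholders; the OpenStack
  queues themselves are declared by oslo.messaging, so in practice a
  broker-side policy is the easier route):

  import pika

  # Declare a queue that RabbitMQ deletes automatically once it has had
  # no consumers for 30 minutes, even if producers keep publishing.
  connection = pika.BlockingConnection(pika.ConnectionParameters('rabbit-host'))
  channel = connection.channel()
  channel.queue_declare(
      queue='q-agent-notifier-port-update_fanout_example',
      arguments={'x-expires': 30 * 60 * 1000})  # milliseconds
  connection.close()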

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548278/+subscriptions



[Yahoo-eng-team] [Bug 1570748] Re: Bug: resize instance after edit flavor with horizon

2016-04-23 Thread OpenStack Infra
Reviewed: https://review.openstack.org/307518
Committed: https://git.openstack.org/cgit/openstack/tempest/commit/?id=98c32b7860f03bedbe2d4d8e35ec53cfa0c0c5cb
Submitter: Jenkins
Branch: master

commit 98c32b7860f03bedbe2d4d8e35ec53cfa0c0c5cb
Author: Matt Riedemann 
Date:   Mon Apr 18 19:34:47 2016 -0400

Add a test for reverting a resize with a deleted flavor

A user should be able to boot a server, resize it and
then revert the resize even though the original flavor
used to boot the instance is deleted. This is because
the old flavor information is stored with the instance
in the nova database, so the original flavor doesn't
actually need to exist anymore.

Depends-On: I5f95021410a309ac07fe9f474cbcd0214d1af208

Change-Id: I356411f96a601f1443d75ac90e42567bef1f8228
Closes-Bug: #1570748


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570748

Title:
  Bug: resize instance after edit flavor with horizon

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed
Status in tempest:
  Fix Released

Bug description:
  An error occurs when resizing an instance after editing the flavor with
  Horizon (and also after deleting the flavor used by the instance).

  Steps to reproduce:

  1. create flavor A
  2. boot an instance using flavor A
  3. edit the flavor with horizon (or delete flavor A)
  -> the result is the same whether you edit or delete the flavor,
     because editing a flavor means deleting and recreating it
  4. resize or migrate the instance
  5. an error occurs

  Log:
  nova-compute.log
    File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in _object_dispatch
      return getattr(target, method)(*args, **kwargs)

    File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
      result = fn(cls, context, *args, **kwargs)

    File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in get_by_id
      db_flavor = db.flavor_get(context, id)

    File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
      return IMPL.flavor_get(context, id)

    File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in wrapper
      return f(*args, **kwargs)

    File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in flavor_get
      raise exception.FlavorNotFound(flavor_id=id)

  FlavorNotFound: Flavor 7 could not be found.

  
  This error occurs because of the code below:
  /opt/openstack/src/nova/nova/compute/manager.py

  def resize_instance(self, context, instance, image,
                      reservations, migration, instance_type,
                      clean_shutdown=True):
      ...
      if (not instance_type or
              not isinstance(instance_type, objects.Flavor)):
          instance_type = objects.Flavor.get_by_id(
              context, migration['new_instance_type_id'])
      ...

  I think the deleted flavor should still be usable when resizing an
  instance. I tested this in stable/kilo, but I think stable/liberty and
  stable/mitaka have the same bug because the source code has not changed.
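
  A minimal sketch of the fallback being asked for (illustrative only, not
  the actual nova fix; it assumes nova's objects module and the
  instance.new_flavor field carried by recent instance objects): prefer the
  flavor data already stored with the instance over a fresh DB lookup that
  raises FlavorNotFound for deleted flavors.

  # Illustrative sketch, not the actual nova change.
  def _get_resize_flavor(context, instance, migration, instance_type):
      if instance_type and isinstance(instance_type, objects.Flavor):
          return instance_type
      # The flavor embedded in the instance survives flavor deletion
      # because it is stored in the nova database with the instance.
      if getattr(instance, 'new_flavor', None):
          return instance.new_flavor
      # Last resort: the DB lookup that raises FlavorNotFound once the
      # flavor has been deleted (the failure reported in this bug).
      return objects.Flavor.get_by_id(
          context, migration['new_instance_type_id'])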

  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1570748/+subscriptions



[Yahoo-eng-team] [Bug 1574092] [NEW] No router namespace after creating legacy router

2016-04-23 Thread Inessa Vasilevskaya
Public bug reported:

In case there are temporary MQ connectivity problems during router
creation, the notification sent by l3_notifier via an RPC cast gets lost.
This leads to the absence of the qrouter namespace on the controllers.

The issue was first seen on a MOS HA (3 controllers) build -
https://bugs.launchpad.net/mos/10.0.x/+bug/1529820

** Affects: neutron
 Importance: Undecided
 Assignee: Inessa Vasilevskaya (ivasilevskaya)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Inessa Vasilevskaya (ivasilevskaya)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1574092

Title:
  No router namespace after creating legacy router

Status in neutron:
  New

Bug description:
  In case there are temporary MQ connectivity problems during router
  creation, the notification sent by l3_notifier via an RPC cast gets lost.
  This leads to the absence of the qrouter namespace on the controllers.

  The issue was first seen on a MOS HA (3 controllers) build -
  https://bugs.launchpad.net/mos/10.0.x/+bug/1529820
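
  For context, a minimal oslo.messaging sketch of why a lost message goes
  unnoticed: the notifier uses a fire-and-forget cast, whereas a call would
  surface the failure as a timeout. The topic, method name, context and
  router id below are placeholders, not the actual l3_notifier code.

  import oslo_messaging
  from oslo_config import cfg

  transport = oslo_messaging.get_transport(cfg.CONF)
  target = oslo_messaging.Target(topic='l3_agent', version='1.0')
  client = oslo_messaging.RPCClient(transport, target)
  ctxt = {}
  router_id = 'ROUTER-UUID'

  # cast(): fire-and-forget. If the AMQP connection drops at the wrong
  # moment, the notification is silently lost and no qrouter namespace
  # is ever created.
  client.cast(ctxt, 'router_added_to_agent', payload=router_id)

  # call(): waits for a reply, so a lost message shows up as a
  # MessagingTimeout that the caller can retry or report.
  client.call(ctxt, 'router_added_to_agent', payload=router_id)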

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1574092/+subscriptions



[Yahoo-eng-team] [Bug 1561196] Re: breadcrumb on subnet page has improper navigation

2016-04-23 Thread OpenStack Infra
Reviewed: https://review.openstack.org/296964
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=b9d81628d7f824a65bf091914b6956df797f7f0f
Submitter: Jenkins
Branch: master

commit b9d81628d7f824a65bf091914b6956df797f7f0f
Author: Rob Cresswell 
Date:   Thu Mar 24 09:11:08 2016 +

Fix incorrect breadcrumb on Admin > Details

Adds a small helper function to retrieve the network details
breadcrumb url. Also fixes the incorrect link on the "Network ID"
field. Also makes `get_redirect_url` a staticmethod, as with the
project version.

Change-Id: Ica3cab4f97af7e0047ac3e49a848edae38c62789
Closes-Bug: 1561196


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561196

Title:
  breadcrumb on subnet page has improper navigation

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  On Admin/Networks/[network detail]/[subnet detail], clicking the
  breadcrumb element with the network name navigates to Project/Networks.
  This is unexpected - it should remain on Admin/Networks.
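
  A rough sketch of the kind of helper the fix adds (the URL name follows
  Horizon's admin namespacing; the exact helper in the patch may differ):

  from django.core.urlresolvers import reverse

  def get_network_detail_url(network_id):
      # Point the breadcrumb at the admin network detail page rather
      # than the project-scoped one.
      return reverse('horizon:admin:networks:detail', args=[network_id])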

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561196/+subscriptions



[Yahoo-eng-team] [Bug 1572036] Re: Check that row is defined when wait cell status

2016-04-23 Thread OpenStack Infra
Reviewed: https://review.openstack.org/307667
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=5d8161147ee3cc41d4853aabb6f049f2093892e7
Submitter: Jenkins
Branch: master

commit 5d8161147ee3cc41d4853aabb6f049f2093892e7
Author: Sergei Chipiga 
Date:   Tue Apr 19 12:12:04 2016 +0300

Check that row is defined when wait cell status

Change-Id: I848b2d14138a362040abc516e32061d0b3570394
Closes-Bug: #1572036


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572036

Title:
  Check that row is defined when wait cell status

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  While waiting for a cell status we need to verify that the row is defined
  before accessing row.cells. The DOM structure changes over time, and we
  may call _get_row when the table is not in the DOM (for example, when the
  table is being reloaded). We should handle that situation and not raise an
  exception because NoneType has no 'cells' attribute.
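
  A minimal sketch of that guard (illustrative; _get_row, row.cells and the
  cell accessor stand in for the real page-object API):

  import time

  def wait_for_cell_status(self, get_cell, expected_statuses, timeout=60):
      deadline = time.time() + timeout
      while time.time() < deadline:
          row = self._get_row()
          # The table may be mid-reload and absent from the DOM; treat a
          # missing row as "not ready yet" instead of failing with
          # "'NoneType' object has no attribute 'cells'".
          if row is not None and get_cell(row.cells).text in expected_statuses:
              return True
          time.sleep(1)
      return False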

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572036/+subscriptions



[Yahoo-eng-team] [Bug 1573949] [NEW] lbaas: better to close a socket explicitly rather than implicitly when they are garbage-collected

2016-04-23 Thread ding bo
Public bug reported:

https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py#L205:

def _get_stats_from_socket(self, socket_path, entity_type):
    try:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(socket_path)
        s.send('show stat -1 %s -1\n' % entity_type)
        raw_stats = ''
        chunk_size = 1024
        while True:
            chunk = s.recv(chunk_size)
            raw_stats += chunk
            if len(chunk) < chunk_size:
                break

        return self._parse_stats(raw_stats)
    except socket.error as e:
        LOG.warning(_LW('Error while connecting to stats socket: %s'), e)
        return {}

In this function a socket connection is created but never closed
explicitly. It is better to close it once all the work is done.

** Affects: neutron
 Importance: Undecided
 Assignee: ding bo (longddropt)
 Status: New

** Summary changed:

- lbaas: better to close a socket explicitly rather than implicitly when they 
are  they are garbage-collected
+ lbaas: better to close a socket explicitly rather than implicitly when they 
are garbage-collected

** Changed in: neutron
 Assignee: (unassigned) => ding bo (longddropt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573949

Title:
  lbaas: better to close a socket explicitly rather than implicitly when
  they are garbage-collected

Status in neutron:
  New

Bug description:
  https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py#L205:

  def _get_stats_from_socket(self, socket_path, entity_type):
      try:
          s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
          s.connect(socket_path)
          s.send('show stat -1 %s -1\n' % entity_type)
          raw_stats = ''
          chunk_size = 1024
          while True:
              chunk = s.recv(chunk_size)
              raw_stats += chunk
              if len(chunk) < chunk_size:
                  break

          return self._parse_stats(raw_stats)
      except socket.error as e:
          LOG.warning(_LW('Error while connecting to stats socket: %s'), e)
          return {}

  In this function a socket connection is created but never closed
  explicitly. It is better to close it once all the work is done.
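
  A sketch of the explicit-close variant using contextlib.closing
  (behaviour otherwise unchanged; LOG and _LW are the module's existing
  helpers):

  import contextlib
  import socket

  def _get_stats_from_socket(self, socket_path, entity_type):
      try:
          with contextlib.closing(
                  socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)) as s:
              s.connect(socket_path)
              s.send('show stat -1 %s -1\n' % entity_type)
              raw_stats = ''
              chunk_size = 1024
              while True:
                  chunk = s.recv(chunk_size)
                  raw_stats += chunk
                  if len(chunk) < chunk_size:
                      break
          # The socket is closed here even if _parse_stats() raises.
          return self._parse_stats(raw_stats)
      except socket.error as e:
          LOG.warning(_LW('Error while connecting to stats socket: %s'), e)
          return {}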

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573949/+subscriptions



[Yahoo-eng-team] [Bug 1573944] [NEW] target-lun id of volume changed when live-migration failed

2016-04-23 Thread jangpro2
Public bug reported:


Description
===
The target LUN id of a volume changes when live migration fails.
I tried to live-migrate a VM with an attached volume, but it failed.
I think nova-compute should roll the VM back to its state before the live migration,
but the target LUN id of the volume was not changed back to the old LUN id.

Environment
===
- OpenStack Release : Liberty
- OS : Ubuntu 14.04.2 LTS
- Hypervisor : KVM
- Cinder Storage : iSCSI (EMC VNX)

Steps to reproduce
==
1. Create a VM and attach a volume to it (target LUN id is 174)
2. Try a live migration... but it fails (target LUN id changes to 97)
3. Try to roll back (target LUN id is still 97)
  * the target LUN id was not changed back to the old one

> before live-migration(nova.block_device_mapping table.connection_info)
{"driver_volume_type": "iscsi", "serial": 
"6352f542-819f-477e-a588-b15d75008178", 
"data": {"target_luns": [174, 174, 174, 174], 
"device_path": "/dev/mapper/360060160a7d03800b0dcf791f008e611", 
"target_discovered": true, "encrypted": false, "qos_specs": null, 
"target_iqn": "iqn.1992-04.com.emc:cx.ckm00142100690.b0", "target_portal": 
"x.x.x.x:3260", "volume_id": "6352f542-819f-477e-a588-b15d75008178", 
"target_lun": 174, "access_mode": "rw", 
"target_portals": ["x.x.x.x:3260", "x.x.x.x:3260", "x.x.x.x:3260", 
"x.x.x.x:3260"]}}

(*) target_lun id was 174.

> after live-migration and rollback (nova.block_device_mapping table.connection_info)
{"driver_volume_type": "iscsi",
 "serial": "6352f542-819f-477e-a588-b15d75008178",
 "data": {"target_luns": [97, 97, 97, 97],
          "target_discovered": true, "encrypted": false, "qos_specs": null,
          "target_iqn": "iqn.1992-04.com.emc:cx.ckm00142100690.b1",
          "target_portal": "x.x.x.x:3260",
          "volume_id": "6352f542-819f-477e-a588-b15d75008178",
          "target_lun": 97, "access_mode": "rw",
          "target_portals": ["x.x.x.x:3260", "x.x.x.x:3260", "x.x.x.x:3260", "x.x.x.x:3260"]}}

(*) target_lun id was changed to 97.

Expected result
===
If the live migration succeeds, it is normal for the target LUN id to change to 97.

Actual result
=
When the migration fails, the target LUN id should change back to the old target id (174), but it remains 97.

In this environment, if nova reboots the VM, the reboot will fail because
target LUN 97 does not exist on the server.
Or, if target LUN 97 exists but is used by another VM, the VM will attach a
volume that belongs to that other VM.
That is a critical situation in which a single volume is attached to
multiple VMs.
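
A minimal sketch of the rollback behaviour being asked for (illustrative,
not the actual nova code; bdm.connection_info and bdm.save() stand in for
the block-device-mapping object): snapshot the pre-migration
connection_info and restore it if the migration fails.

import copy

def live_migrate_with_rollback(bdm, do_migration):
    # Keep the pre-migration connection_info (target_lun 174 above) so a
    # failed migration can be rolled back cleanly.
    saved_connection_info = copy.deepcopy(bdm.connection_info)
    try:
        do_migration()
    except Exception:
        # Restore the original attachment details instead of leaving the
        # new (and now invalid) target_lun 97 in the database.
        bdm.connection_info = saved_connection_info
        bdm.save()
        raise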

** Affects: nova
 Importance: Undecided
 Assignee: jangpro2 (jangseon-ryu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jangpro2 (jangseon-ryu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1573944

Title:
  target-lun id of volume changed when live-migration failed

Status in OpenStack Compute (nova):
  New

Bug description:
  
  Description
  ===
  The target LUN id of a volume changes when live migration fails.
  I tried to live-migrate a VM with an attached volume, but it failed.
  I think nova-compute should roll the VM back to its state before the live migration,
  but the target LUN id of the volume was not changed back to the old LUN id.

  Environment
  ===
  - OpenStack Release : Liberty
  - OS : Ubuntu 14.04.2 LTS
  - Hypervisor : KVM
  - Cinder Storage : iSCSI (EMC VNX)

  Steps to reproduce
  ==
  1. Create a VM and attach a volume to it (target LUN id is 174)
  2. Try a live migration... but it fails (target LUN id changes to 97)
  3. Try to roll back (target LUN id is still 97)
    * the target LUN id was not changed back to the old one

  > before live-migration(nova.block_device_mapping table.connection_info)
  {"driver_volume_type": "iscsi", "serial": 
"6352f542-819f-477e-a588-b15d75008178", 
  "data": {"target_luns": [174, 174, 174, 174], 
  "device_path": "/dev/mapper/360060160a7d03800b0dcf791f008e611", 
"target_discovered": true, "encrypted": false, "qos_specs": null, 
  "target_iqn": "iqn.1992-04.com.emc:cx.ckm00142100690.b0", "target_portal": 
"x.x.x.x:3260", "volume_id": "6352f542-819f-477e-a588-b15d75008178", 
  "target_lun": 174, "access_mode": "rw", 
  "target_portals": ["x.x.x.x:3260", "x.x.x.x:3260", "x.x.x.x:3260", 
"x.x.x.x:3260"]}}

  (*) target_lun id was 174.

  > after live-migration and rollback (nova.block_device_mapping table.connection_info)
  {"driver_volume_type": "iscsi",
   "serial": "6352f542-819f-477e-a588-b15d75008178",
   "data": {"target_luns": [97, 97, 97, 97],
            "target_discovered": true, "encrypted": false, "qos_specs": null,
            "target_iqn": "iqn.1992-04.com.emc:cx.ckm00142100690.b1",
            "target_portal": "x.x.x.x:3260",
            "volume_id": "6352f542-819f-477e-a588-b15d75008178",
            "target_lun": 97, "access_mode": "rw",
            "target_portals": ["x.x.x.x:3260", "x.x.x.x:3260", "x.x.x.x:3260", "x.x.x.x:3260"]}}

  (*) target_lun id was changed to 97.

  Expected result
  ===
  If the live migration succeeds, it is normal for the target LUN id to change to 97.

  Actual result