[Yahoo-eng-team] [Bug 1405091] [NEW] Create the same member the same address, protocol-port and pool id should not return 500 error

2014-12-23 Thread KaiLin
Public bug reported:

SYMPTOM:
1. Create the first member with an address, protocol-port and a pool id. (succeeds)
2. Create a second member with the same address, protocol-port and pool id. (fails)

The second request returns Internal Server Error (HTTP 500).
I think it should not return a 500 error; a 409 (Conflict) would be more appropriate.

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Create the same member use one pool id should not return 500 error
+ Create the same member the same address, protocol-port and pool id should not 
return 500 error

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405091

Title:
  Create the same member the same address, protocol-port and pool id
  should not return 500 error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  SYMPTOM:
  1. Create the first member with an address, protocol-port and a pool id. (succeeds)
  2. Create a second member with the same address, protocol-port and pool id. (fails)

  The second request returns Internal Server Error (HTTP 500).
  I think it should not return a 500 error; a 409 (Conflict) would be more appropriate.
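
  A possible direction, sketched below as standalone Python with hypothetical
  names (MemberExists, create_member) rather than the real Neutron LBaaS code:
  detect the duplicate (pool_id, address, protocol_port) combination and raise
  an exception that the API layer maps to 409 Conflict, instead of letting the
  database error surface as a 500.

      class Conflict(Exception):
          """Illustrative base class the API layer would map to HTTP 409."""
          code = 409

      class MemberExists(Conflict):
          def __init__(self, address, protocol_port, pool_id):
              super(MemberExists, self).__init__(
                  "Member %s:%s already exists in pool %s"
                  % (address, protocol_port, pool_id))

      def create_member(db, member):
          """Create a member, signalling duplicates as a 409 rather than a 500."""
          key = (member['pool_id'], member['address'], member['protocol_port'])
          if key in db:  # stand-in for the DB unique constraint
              raise MemberExists(member['address'],
                                 member['protocol_port'],
                                 member['pool_id'])
          db[key] = member
          return member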

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404943] Re: 'Error: Invalid service catalog service: volume' if no volume service is defined

2014-12-23 Thread Julie Pichon
*** This bug is a duplicate of bug 1394900 ***
https://bugs.launchpad.net/bugs/1394900

** This bug has been marked a duplicate of bug 1394900
   cinder disabled, many popups about missing volume service

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1404943

Title:
  'Error: Invalid service catalog service: volume' if no volume service
  is defined

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If an OpenStack installation has no cinder service in the endpoint list,
  Horizon reports 'Error: Invalid service catalog service: volume' many
  times (after login, and each time the dialog for a new instance is opened).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1404943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405107] [NEW] The instance's task_state is 'deleting' all the time because of the error 'Instance disappeared during terminate'.

2014-12-23 Thread Rong Han ZTE
Public bug reported:

An instance's vm_state is ERROR. I deleted it, but the task_state stays
'deleting' forever. The log shows the message "Instance disappeared
during terminate", caused by an InstanceNotFound exception.

1. The relevant code is as follows:

    @utils.synchronized(instance['uuid'])
    def do_terminate_instance(instance, bdms):
        try:
            self._delete_instance(context, instance, bdms, quotas)
        except exception.InstanceNotFound:
            LOG.info(_("Instance disappeared during terminate"),
                     instance=instance)
        except Exception as error:
            # As we're trying to delete always go to Error if something
            # goes wrong that _delete_instance can't handle.
            with excutils.save_and_reraise_exception():
                LOG.exception(_('Setting instance vm_state to ERROR'),
                              instance=instance)
                self._set_instance_error_state(context, instance['uuid'])


2. [root@lxlconductor1 instances(keystone_admin)]# nova list
+--------------------------------------+----------+--------+------------+-------------+----------+
| ID                                   | Name     | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------+
| e72d62c9-5d54-4bd0-8afc-d45116467ba5 | hanrong2 | ACTIVE | deleting   | NOSTATE     |          |
+--------------------------------------+----------+--------+------------+-------------+----------+

3. The error log is as follows:
2015-02-21 17:20:41.049 429 AUDIT nova.compute.manager 
[req-621830f5-8d73-4897-90f9-505e4bf9a6e4 7aba40236a4c4494aba4eb0b9365ffee 
be568b8239d147e58f1ef16b6011c93d] [instance: 
e72d62c9-5d54-4bd0-8afc-d45116467ba5] Terminating instance
2015-02-21 17:21:06.031 429 ERROR nova.virt.libvirt.driver [-] [instance: 
e72d62c9-5d54-4bd0-8afc-d45116467ba5] During wait destroy, instance disappeared.
2015-02-21 17:21:06.066 429 INFO urllib3.connectionpool [-] Starting new HTTP 
connection (1): 10.43.114.108
2015-02-21 17:21:06.550 429 INFO nova.virt.libvirt.driver 
[req-621830f5-8d73-4897-90f9-505e4bf9a6e4 7aba40236a4c4494aba4eb0b9365ffee 
be568b8239d147e58f1ef16b6011c93d] [instance: 
e72d62c9-5d54-4bd0-8afc-d45116467ba5] Deletion of 
/var/lib/nova/instances/e72d62c9-5d54-4bd0-8afc-d45116467ba5 complete
2015-02-21 17:21:38.469 429 AUDIT nova.compute.resource_tracker 
[req-5aae7032-2766-4adf-be17-ab059b4d5cae None None] Auditing locally available 
compute resources
2015-02-21 17:21:38.612 429 INFO oslo.messaging._drivers.impl_qpid [-] 
Connected to AMQP server on 10.43.114.108:5671
2015-02-21 17:22:44.876 429 INFO oslo.messaging._drivers.impl_qpid [-] 
Connected to AMQP server on 10.43.114.108:5671
2015-02-21 17:23:05.684 429 INFO nova.compute.manager 
[req-621830f5-8d73-4897-90f9-505e4bf9a6e4 7aba40236a4c4494aba4eb0b9365ffee 
be568b8239d147e58f1ef16b6011c93d] [instance: 
e72d62c9-5d54-4bd0-8afc-d45116467ba5] Instance disappeared during terminate

4. I think that if the instance has disappeared, the delete should be treated as successful.
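
A minimal standalone sketch of that idea (hypothetical names, not Nova's actual
fix): an InstanceNotFound raised during terminate should still end with the
instance marked deleted instead of leaving task_state stuck at 'deleting'.

    class InstanceNotFound(Exception):
        pass

    def terminate_instance(instance, delete_backend):
        """delete_backend stands in for Nova's _delete_instance()."""
        instance['task_state'] = 'deleting'
        try:
            delete_backend(instance)
        except InstanceNotFound:
            # The guest is already gone; treat the delete as successful
            # rather than returning with task_state still 'deleting'.
            pass
        instance['task_state'] = None
        instance['vm_state'] = 'deleted'
        return instance

    if __name__ == '__main__':
        def already_gone(_instance):
            raise InstanceNotFound()

        print(terminate_instance({'uuid': 'demo'}, already_gone))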

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405107

Title:
  The instance's task_state is 'deleting' all the time because of the
  error 'Instance disappeared during terminate'.

Status in OpenStack Compute (Nova):
  New

Bug description:
  An instance's vm_state is ERROR. I deleted it, but the task_state
  stays 'deleting' forever. The log shows the message "Instance
  disappeared during terminate", caused by an InstanceNotFound exception.

  1. The relevant code is as follows:

      @utils.synchronized(instance['uuid'])
      def do_terminate_instance(instance, bdms):
          try:
              self._delete_instance(context, instance, bdms, quotas)
          except exception.InstanceNotFound:
              LOG.info(_("Instance disappeared during terminate"),
                       instance=instance)
          except Exception as error:
              # As we're trying to delete always go to Error if something
              # goes wrong that _delete_instance can't handle.
              with excutils.save_and_reraise_exception():
                  LOG.exception(_('Setting instance vm_state to ERROR'),
                                instance=instance)
                  self._set_instance_error_state(context, instance['uuid'])

  
  2. [root@lxlconductor1 instances(keystone_admin)]# nova list
  +--------------------------------------+----------+--------+------------+-------------+----------+
  | ID                                   | Name     | Status | Task State | Power State | Networks |
  +--------------------------------------+----------+--------+------------+-------------+----------+

[Yahoo-eng-team] [Bug 1405109] [NEW] horizon tries to use security groups even if they are disabled

2014-12-23 Thread Pavel Gluschak
Public bug reported:

2014.2.1 deployed by packstack on Centos 7.

I completely disabled security groups in both neutron (ml2 plugin) and
nova:

* /etc/neutron/plugin.ini
enable_security_group = False

* /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
firewall_driver=neutron.agent.firewall.NoopFirewallDriver

* /etc/nova/nova.conf
security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver

But Horizon still shows the Security Groups tab under Access & Security and
pops up "Error: Unable to retrieve security groups." The same message
pops up when I create a new instance.

I set 'enable_security_group': False in /etc/openstack-
dashboard/local_settings and rebooted all OpenStack nodes to be sure, but
this didn't help.

There should be a way in Horizon to completely remove security group
references from the web UI, either:
1) Horizon could detect that security groups are disabled in both nova and neutron, or
2) an option in the Horizon config

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405109

Title:
  horizon tries to use security groups even if they are disabled

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  2014.2.1 deployed by packstack on Centos 7.

  I completely disabled security groups in both neutron (ml2 plugin) and
  nova:

  * /etc/neutron/plugin.ini
  enable_security_group = False

  * /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
  firewall_driver=neutron.agent.firewall.NoopFirewallDriver

  * /etc/nova/nova.conf
  security_group_api=neutron
  firewall_driver=nova.virt.firewall.NoopFirewallDriver

  But Horizon still shows the Security Groups tab under Access & Security
  and pops up "Error: Unable to retrieve security groups." The same message
  pops up when I create a new instance.

  I set 'enable_security_group': False in /etc/openstack-
  dashboard/local_settings and rebooted all OpenStack nodes to be sure,
  but this didn't help.

  There should be a way in Horizon to completely remove security group
  references from the web UI, either:
  1) Horizon could detect that security groups are disabled in both nova and neutron, or
  2) an option in the Horizon config (see the sketch below)
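
  A standalone sketch of the kind of guard being requested (assumption: this
  is not how Horizon is wired today; the setting name simply mirrors the
  reporter's 'enable_security_group' key): one flag is consulted before any
  security-group tab or API call is exposed.

      SETTINGS = {
          'OPENSTACK_NEUTRON_NETWORK': {
              'enable_security_group': False,  # operator disabled security groups
          },
      }

      def security_groups_enabled(settings=SETTINGS):
          network_config = settings.get('OPENSTACK_NEUTRON_NETWORK', {})
          return network_config.get('enable_security_group', True)

      def access_and_security_tabs():
          """Only expose the Security Groups tab when the feature is enabled."""
          tabs = ['KeypairsTab', 'FloatingIPsTab']
          if security_groups_enabled():
              tabs.append('SecurityGroupsTab')
          return tabs

      if __name__ == '__main__':
          print(access_and_security_tabs())  # no SecurityGroupsTab when disabled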

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405135] [NEW] [neutron_lbaas] Neutron lbaas can't update vip session-persistence's cookie_name

2014-12-23 Thread Dongcan Ye
Public bug reported:

When I update the load balancer VIP session_persistence's cookie_name,
neutron-client returns OK, but the database still shows NULL.

Show the vip info:
$ neutron lb-vip-show 38f4d333-66a6-4012-8edf-b8549238aa22
+-+--+
| Field   | Value|
+-+--+
| address | 10.10.136.62 |
| admin_state_up  | True |
| connection_limit| -1   |
| description |  |
| id  | 38f4d333-66a6-4012-8edf-b8549238aa22 |
| name| ye_vip   |
| pool_id | 7f4c278b-1630-4299-9478-b8653d345ec6 |
| port_id | d213d0e6-557d-4b10-8ee9-b2c70ccdc7a8 |
| protocol| HTTP |
| protocol_port   | 80   |
| session_persistence | {"type": "SOURCE_IP"}                |
| status  | ACTIVE   |
| status_description  |  |
| subnet_id   | b5017991-b63c-4bd0-a7e5-b3eaa8d81c23 |
| tenant_id   | 5b969b39b06a4528bbd4198315377eb0 |

Use the lb-vip-update command to update cookie_name [1]:
$ neutron lb-vip-update 38f4d333-66a6-4012-8edf-b8549238aa22  
--session-persistence type=dict type=SOURCE_IP,[cookie_name=test]


In the database, as we can see, it is still NULL.

mysql> select * from sessionpersistences where
vip_id='38f4d333-66a6-4012-8edf-b8549238aa22';
+--+---+-+
| vip_id   | type  | cookie_name |
+--+---+-+
| 38f4d333-66a6-4012-8edf-b8549238aa22 | SOURCE_IP | NULL|
+--+---+-+
1 row in set (0.00 sec)

[1]https://wiki.openstack.org/wiki/Neutron/LBaaS/CLI

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405135

Title:
  [neutron_lbaas] Neutron lbaas can't update vip  session-persistence's
  cookie_name

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I update the load balancer VIP session_persistence's
  cookie_name, neutron-client returns OK, but the database still shows
  NULL.

  Show the vip info:
  $ neutron lb-vip-show 38f4d333-66a6-4012-8edf-b8549238aa22
  +-+--+
  | Field   | Value|
  +-+--+
  | address | 10.10.136.62 |
  | admin_state_up  | True |
  | connection_limit| -1   |
  | description |  |
  | id  | 38f4d333-66a6-4012-8edf-b8549238aa22 |
  | name| ye_vip   |
  | pool_id | 7f4c278b-1630-4299-9478-b8653d345ec6 |
  | port_id | d213d0e6-557d-4b10-8ee9-b2c70ccdc7a8 |
  | protocol| HTTP |
  | protocol_port   | 80   |
  | session_persistence | {"type": "SOURCE_IP"}                |
  | status  | ACTIVE   |
  | status_description  |  |
  | subnet_id   | b5017991-b63c-4bd0-a7e5-b3eaa8d81c23 |
  | tenant_id   | 5b969b39b06a4528bbd4198315377eb0 |

  Use the lb-vip-update command to update cookie_name [1]:
  $ neutron lb-vip-update 38f4d333-66a6-4012-8edf-b8549238aa22  
--session-persistence type=dict type=SOURCE_IP,[cookie_name=test]

  
  In the database, as we can see, it is still NULL.

  mysql> select * from sessionpersistences where
  vip_id='38f4d333-66a6-4012-8edf-b8549238aa22';
  +--+---+-+
  | vip_id   | type  | cookie_name |
  +--+---+-+
  | 38f4d333-66a6-4012-8edf-b8549238aa22 | SOURCE_IP | NULL|
  +--+---+-+
  1 row in set (0.00 sec)

  [1]https://wiki.openstack.org/wiki/Neutron/LBaaS/CLI
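
  A standalone sketch of the update semantics the reporter expects (a
  hypothetical update_session_persistence helper, not the actual Neutron
  LBaaS plugin code): when the API request carries a cookie_name, the
  existing sessionpersistences row should be updated rather than left NULL.

      def update_session_persistence(row, requested):
          """row: dict for the sessionpersistences record; requested: API body."""
          row['type'] = requested.get('type', row['type'])
          if 'cookie_name' in requested:
              # This is the write the reporter expects; today the column
              # apparently stays NULL after lb-vip-update.
              row['cookie_name'] = requested['cookie_name']
          return row

      if __name__ == '__main__':
          db_row = {'vip_id': '38f4d333-66a6-4012-8edf-b8549238aa22',
                    'type': 'SOURCE_IP', 'cookie_name': None}
          print(update_session_persistence(
              db_row, {'type': 'SOURCE_IP', 'cookie_name': 'test'}))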

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1277285] Re: Agent list API fails with XML when hyper-v bridge mapping key contains a *

2014-12-23 Thread Elena Ezhova
XML support has been dropped in neutron. I think this makes this bug
invalid. Please, reopen if you think otherwise.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277285

Title:
  Agent list API fails with XML when hyper-v bridge mapping key contains
  a *

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  While running tempest tests on a devstack instance with a hyper-v
  compute node connected, the following test:

  tempest.api.network.admin.test_agent_management

  fails with:

  ft10.1: setUpClass
  (tempest.api.network.admin.test_agent_management.AgentManagementTestXML)_StringException:
  Traceback (most recent call last):
    File "tempest/api/network/admin/test_agent_management.py", line 26, in setUpClass
      resp, body = cls.admin_client.list_agents()
    File "tempest/services/network/network_client_base.py", line 104, in _list
      result = {plural_name: self.deserialize_list(body)}
    File "tempest/services/network/xml/network_client.py", line 41, in deserialize_list
      return parse_array(etree.fromstring(body), self.PLURALS)
    File "lxml.etree.pyx", line 2754, in lxml.etree.fromstring (src/lxml/lxml.etree.c:54631)
    File "parser.pxi", line 1578, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:82748)
    File "parser.pxi", line 1457, in lxml.etree._parseDoc (src/lxml/lxml.etree.c:81546)
    File "parser.pxi", line 965, in lxml.etree._BaseParser._parseDoc (src/lxml/lxml.etree.c:78216)
    File "parser.pxi", line 569, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:74472)
    File "parser.pxi", line 650, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:75363)
    File "parser.pxi", line 590, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:74696)
  XMLSyntaxError: StartTag: invalid element name, line 2, column 2346

  Further investigation revealed that the generated XML contained an
  invalid element (<.*>br100</.*>). Sample XML data can be found at the
  following pastebin address:

  http://paste.openstack.org/show/62882/

  The line that defines the dictionary from which the element name is
  derived is:

  
https://github.com/openstack/neutron/blob/4cdccd69a45aec19d547c10f29f61359b69ad6c1/neutron/plugins/hyperv/agent/hyperv_neutron_agent.py#L91

  Although the issue is triggered by the hyper-v agent, this looks more
  like a general XML marshalling issue.
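
  A small standalone sketch of the underlying marshalling problem (xml.etree
  is used purely for illustration; none of this is the tempest or neutron
  serializer): a dict key containing '*' is not a valid XML element name, so
  a serializer that uses keys as tag names has to escape or reject such keys.

      import re
      import xml.etree.ElementTree as ET

      NCNAME = re.compile(r'^[A-Za-z_][\w.\-]*$')

      def dict_to_xml(tag, mapping):
          root = ET.Element(tag)
          for key, value in mapping.items():
              if not NCNAME.match(key):
                  # A key like '.*' would yield '<.*>br100</.*>', which lxml
                  # rejects with "StartTag: invalid element name".
                  child = ET.SubElement(root, 'item', {'key': key})
              else:
                  child = ET.SubElement(root, key)
              child.text = str(value)
          return ET.tostring(root)

      if __name__ == '__main__':
          print(dict_to_xml('bridge_mappings', {'.*': 'br100'}))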

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1277285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405197] [NEW] Race between delete_subnet and delete_network

2014-12-23 Thread Eugene Nikanorov
Public bug reported:

If the code executes the following sequence:

delete_subnet(subnet_id)
delete_network(net_id)

where subnet subnet_id belongs to a network with net_id, there's a
possible race condition between dhcp_port_release initiated by subnet
deletion and deletion of service ports (DHCP) during network deletion.

The latter operation throws PortNotFound and network deletion stops.

The solution could be to simply catch such an error during port deletion.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: ml2

** Tags added: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405197

Title:
  Race between delete_subnet and delete_network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If the code executes the following sequence:

  delete_subnet(subnet_id)
  delete_network(net_id)

  where subnet subnet_id belongs to a network with net_id, there's a
  possible race condition between dhcp_port_release initiated by subnet
  deletion and deletion of service ports (DHCP) during network deletion.

  The latter operation throws PortNotFound and network deletion stops.

  The solution could be to simply catch such an error during port deletion,
  as in the sketch below.
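
  A minimal sketch of that approach (assumed names, not the actual ML2 code):
  while deleting a network's service ports, tolerate ports that a concurrent
  operation (e.g. the subnet delete releasing the DHCP port) has already
  removed.

      class PortNotFound(Exception):
          pass

      def delete_network_ports(port_ids, delete_port):
          """delete_port stands in for the plugin's per-port delete call."""
          for port_id in port_ids:
              try:
                  delete_port(port_id)
              except PortNotFound:
                  # The DHCP port was already released by delete_subnet;
                  # this is not an error for the network delete path.
                  continue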

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405041] Re: test report a bug

2014-12-23 Thread Lance Bragstad
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1405041

Title:
  test report a bug

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  test

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1405041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405239] [NEW] ML2 Cisco Nexus Cfg not persistent after reboot

2014-12-23 Thread Carol Bouchard
Public bug reported:

Once ML2 configurations are applied to the Nexus and the config is stable,
they are lost when the Nexus reboots.

** Affects: neutron
 Importance: Undecided
 Assignee: Carol Bouchard (caboucha)
 Status: New


** Tags: cisco ml2

** Changed in: neutron
 Assignee: (unassigned) => Carol Bouchard (caboucha)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405239

Title:
  ML2 Cisco Nexus Cfg not persistent after reboot

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Once ML2 configurations are applied to the Nexus and the config is stable,
  they are lost when the Nexus reboots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405271] [NEW] nic ordering is inconsistent in backend and ui

2014-12-23 Thread Vish Ishaya
Public bug reported:

The order of NICs in both the UI and the backend should be consistent,
since it has a strong impact on the guest. The first network shown
should be the first network that the guest is connected to.
Unfortunately we store this info in a dict, which has inconsistent
ordering. This should be an ordered dict.

** Affects: nova
 Importance: High
 Assignee: Vish Ishaya (vishvananda)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Vish Ishaya (vishvananda)

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405271

Title:
  nic ordering is inconsistent in backend and ui

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The order of NICs in both the UI and the backend should be consistent,
  since it has a strong impact on the guest. The first network shown
  should be the first network that the guest is connected to.
  Unfortunately we store this info in a dict, which has inconsistent
  ordering. This should be an ordered dict (see the sketch below).
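
  A sketch of the proposed change (illustrative only, not the actual Nova data
  structure): keep the NIC/network info in an insertion-ordered mapping so the
  first network requested stays the first NIC presented to the guest.

      from collections import OrderedDict

      requested_networks = [('net-a', 'port-1'), ('net-b', 'port-2')]

      # A plain dict (before Python 3.7) does not guarantee this ordering;
      # OrderedDict does.
      nic_map = OrderedDict(requested_networks)

      for index, (net_id, port_id) in enumerate(nic_map.items()):
          print("nic %d -> network %s (port %s)" % (index, net_id, port_id))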

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1405271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405294] [NEW] Live migration with attached volume performs breaking rollback on failure

2014-12-23 Thread Alex Meade
Public bug reported:

During live migration with attached volume, nova ignores initialize
connection errors and does not roll back.

Steps:
* Create a nova instance
* Attach a cinder volume
* Perform ‘nova live-migration’ to a different backend
  - Cause a failure in the ‘initialize_connection’ call to the new host
* Wait for nova to call ‘terminate_connection’ on the connection to the 
original host

Result:
* Instance remains on original host with Cinder volume attached according to 
Cinder but no longer mapped on the backend. This removes connectivity from 
storage to the host and can cause data loss.


Triage:
What seems to be happening is that Nova does not stop the migration when it
receives an error from Cinder, and it ends up calling terminate_connection
for the source host when it should not.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405294

Title:
  Live migration with attached volume performs breaking rollback on
  failure

Status in OpenStack Compute (Nova):
  New

Bug description:
  During live migration with attached volume, nova ignores initialize
  connection errors and does not roll back.

  Steps:
  * Create a nova instance
  * Attach a cinder volume
  * Perform ‘nova live-migration’ to a different backend
    - Cause a failure in the ‘initialize_connection’ call to the new host
  * Wait for nova to call ‘terminate_connection’ on the connection to the 
original host

  Result:
  * Instance remains on original host with Cinder volume attached according to 
Cinder but no longer mapped on the backend. This removes connectivity from 
storage to the host and can cause data loss.

  
  Triage:
  What seems to be happening is that Nova does not stop the migration when it
  receives an error from Cinder, and it ends up calling terminate_connection
  for the source host when it should not. A sketch of the expected rollback
  behaviour follows.
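
  An illustrative sketch only (hypothetical helper names, not Nova's real
  migration flow): if attaching the volume on the destination fails, abort
  the migration and leave the source connection untouched, instead of
  continuing and later tearing down the source path.

      class MigrationError(Exception):
          pass

      def live_migrate_volume(volume, src_host, dst_host, cinder):
          try:
              dst_conn = cinder.initialize_connection(volume, dst_host)
          except Exception as exc:
              # Roll back: the instance stays on src_host with its existing
              # connection; do NOT call terminate_connection for src_host.
              raise MigrationError(
                  "aborting migration, destination attach failed: %s" % exc)
          # Only after the destination attach succeeds is it safe to clean
          # up the source connection.
          cinder.terminate_connection(volume, src_host)
          return dst_conn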

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1405294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405311] [NEW] Incorrect check for security groups in create_port

2014-12-23 Thread Salvatore Orlando
Public bug reported:

The check will fail if security groups in the request body are an empty
string.

In http://git.openstack.org/cgit/stackforge/vmware-
nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1127 the
code should not raise if security groups are an empty list.

This causes tempest's smoke and full test suites to always fail.

** Affects: neutron
 Importance: Critical
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: vmware-nsx
 Importance: Critical
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
 Milestone: None => kilo-2

** Changed in: neutron
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405311

Title:
  Incorrect check for security groups in create_port

Status in OpenStack Neutron (virtual network service):
  New
Status in VMware NSX:
  New

Bug description:
  The check will fail if security groups in the request body are an
  empty string.

  In http://git.openstack.org/cgit/stackforge/vmware-
  nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1127 the
  code should not raise if security groups are an empty list.

  This causes tempest's smoke and full test suites to always fail.
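
  A sketch of the intended check (an assumed sentinel and helper name, not
  the plugin's actual code): only raise when a non-empty security group list
  was actually requested; an unset attribute, an empty list or an empty
  string should pass through.

      UNSET = object()   # stands in for neutron's "attribute not specified" sentinel

      class SecurityGroupsNotSupported(Exception):
          pass

      def check_port_security_groups(security_groups=UNSET):
          """Raise only when non-empty security groups were actually requested."""
          if security_groups is UNSET or not security_groups:
              # Unset, empty list or empty string: nothing was requested,
              # so there is nothing to reject.
              return
          raise SecurityGroupsNotSupported(security_groups)

      if __name__ == '__main__':
          check_port_security_groups()       # unset: ok
          check_port_security_groups([])     # empty list: ok (the reported bug)
          check_port_security_groups('')     # empty string: ok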

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382440] Re: Detaching multipath volume doesn't work properly when using different targets with same portal for each multipath device

2014-12-23 Thread Keiichi KII
** Changed in: nova
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382440

Title:
  Detaching multipath volume doesn't work properly when using different
  targets with same portal for each multipath device

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Overview:
  On Icehouse (2014.1.2) with iscsi_use_multipath=true, detaching an iSCSI
  multipath volume doesn't work properly. When different targets (IQNs)
  associated with the same portal are used for different multipath devices,
  all of the targets are deleted via disconnect_volume().

  This problem is not yet fixed in upstream. However, the attached patch
  fixes this problem.

  Steps to Reproduce:

  We can easily reproduce this issue without any special storage
  system with the following steps:

    1. configure iscsi_use_multipath=True in nova.conf on the compute node.
    2. configure volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
       in cinder.conf on the cinder node.
    3. create an instance.
    4. create 3 volumes and attach them to the instance.
    5. detach one of these volumes.
    6. check multipath -ll and iscsiadm --mode session.

  Detail:

  This problem was introduced with the following patch which modified
  attaching and detaching volume operations for different targets
  associated with different portals for the same multipath device.

commit 429ac4dedd617f8c1f7c88dd8ece6b7d2f2accd0
Author: Xing Yang xing.y...@emc.com
Date:   Date: Mon Jan 6 17:27:28 2014 -0500

  Fixed a problem in iSCSI multipath

  We found out that:

     # Do a discovery to find all targets.
     # Targets for multiple paths for the same multipath device
     # may not be the same.
     out = self._run_iscsiadm_bare(['-m',
                                    'discovery',
                                    '-t',
                                    'sendtargets',
                                    '-p',
                                    iscsi_properties['target_portal']],
                                   check_exit_code=[0, 255])[0] \
         or ""

     ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
     ...
     # If no other multipath device attached has the same iqn
     # as the current device
     if not in_use:
         # disconnect if no other multipath devices with same iqn
         self._disconnect_mpath(iscsi_properties, ips_iqns)
         return
     elif multipath_device not in devices:
         # delete the devices associated w/ the unused multipath
         self._delete_mpath(iscsi_properties, multipath_device, ips_iqns)

  When different targets (IQNs) associated with the same portal are used for
  different multipath devices, ips_iqns ends up containing every target on
  the compute node, because it is built from the output of
  "iscsiadm -m discovery -t sendtargets -p <the shared portal>".
  Then _delete_mpath() deletes all of the targets in ips_iqns
  via /sys/block/sdX/device/delete.
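
  An illustrative sketch of the kind of filtering the attached patch is aiming
  for (assumed data shapes and function name, not the actual nova/libvirt
  volume driver code): only the portal/IQN pairs belonging to the volume being
  detached should be passed to the device cleanup, not every target discovered
  on the shared portal.

      def targets_for_volume(ips_iqns, target_iqn):
          """ips_iqns: iterable of (portal, iqn) pairs from sendtargets discovery."""
          return [(portal, iqn) for portal, iqn in ips_iqns if iqn == target_iqn]

      if __name__ == '__main__':
          discovered = [
              ('192.168.0.55:3260', 'iqn.2010-10.org.openstack:volume-aaaa'),
              ('192.168.0.55:3260', 'iqn.2010-10.org.openstack:volume-bbbb'),
              ('192.168.0.55:3260', 'iqn.2010-10.org.openstack:volume-cccc'),
          ]
          # Only the detached volume's target should be cleaned up.
          print(targets_for_volume(
              discovered, 'iqn.2010-10.org.openstack:volume-aaaa'))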

  For example, we create an instance and attach 3 volumes to the
  instance:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
tcp: [19] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b2c01f6a-5723-40e7-9f21-f6b728021b0e
# multipath -ll
330030001 dm-7 IET,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 23:0:0:1 sdd 8:48 active ready running
330010001 dm-5 IET,VIRTUAL-DISK
size=2.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 21:0:0:1 sdb 8:16 active ready running
330020001 dm-6 IET,VIRTUAL-DISK
size=3.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 22:0:0:1 sdc 8:32 active ready running

  Then we detach one of these volumes:

# nova volume-detach 95f959cd-d180-4063-ae03-9d21dbd7cc50 5c526ffa-
  ba88-4fe2-a570-9e35c4880d12

  As a result of detaching the volume, the compute node still has 3 iSCSI
  sessions, but the instance fails to access the attached multipath devices:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
tcp: [19] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b2c01f6a-5723-40e7-9f21-f6b728021b0e
# multipath -ll
330030001 dm-7 ,
size=4.0G features='0' hwhandler='0' wp=rw
`-+-