[Yahoo-eng-team] [Bug 1756507] [NEW] The function _cleanup_running_deleted_instances detaches volumes twice

2018-03-17 Thread YaoZheng_ZTE
Public bug reported:

https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n7421,
The volumes are already detached during the preceding _shutdown_instance()
call, so detach should not be requested from _cleanup_volumes() in this
case. The call should probably be changed to
self._cleanup_volumes(context, instance, bdms, detach=False).
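
For illustration, a minimal self-contained sketch of the idea (the classes and
method bodies below are hypothetical stand-ins, not the real nova code):

class FakeVolumeAPI(object):
    def detach(self, context, volume_id):
        print('detach requested for %s' % volume_id)

class FakeComputeManager(object):
    def __init__(self):
        self.volume_api = FakeVolumeAPI()

    def _shutdown_instance(self, context, instance, bdms):
        for volume_id in bdms:
            self.volume_api.detach(context, volume_id)      # first detach

    def _cleanup_volumes(self, context, instance, bdms, detach=True):
        for volume_id in bdms:
            if detach:
                self.volume_api.detach(context, volume_id)  # redundant detach
            # ... delete the volume here if delete_on_termination is set ...

    def _cleanup_running_deleted_instances(self, context, instance, bdms):
        self._shutdown_instance(context, instance, bdms)
        # suggested change: the volumes are already detached, so skip detach
        self._cleanup_volumes(context, instance, bdms, detach=False)

FakeComputeManager()._cleanup_running_deleted_instances(None, 'vm-1', ['vol-1'])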

** Affects: nova
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1756507

Title:
  The function _cleanup_running_deleted_instances detaches volumes twice

Status in OpenStack Compute (nova):
  New

Bug description:
  
https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n7421,
  The volumes are already detached during the preceding _shutdown_instance()
  call, so detach should not be requested from _cleanup_volumes() in this
  case. The call should probably be changed to
  self._cleanup_volumes(context, instance, bdms, detach=False).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1756507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2016-07-11 Thread YaoZheng_ZTE
** Also affects: quark
   Importance: Undecided
   Status: New

** Changed in: quark
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in anvil:
  New
Status in bifrost:
  Fix Released
Status in Blazar:
  In Progress
Status in Ceilometer:
  New
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in dox:
  New
Status in DragonFlow:
  New
Status in Freezer:
  New
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Heat Translator:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in kolla-mesos:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-brocade:
  New
Status in networking-cisco:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Fix Released
Status in ooi:
  Fix Released
Status in os-brick:
  In Progress
Status in os-client-config:
  Fix Released
Status in oslo.messaging:
  New
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Released
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Quark: Money Reinvented:
  New
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  New
Status in Solum:
  Fix Released
Status in Stackalytics:
  Fix Released
Status in OpenStack Object Storage (swift):
  New
Status in taskflow:
  New
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in python-tuskarclient package in Ubuntu:
  Fix Committed

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to get
  clearer messages in case of failure.
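
  A minimal hypothetical test illustrating the change (not taken from any of
  the projects listed above):

  import unittest

  class ExampleTest(unittest.TestCase):
      def test_lookup_returns_none(self):
          result = {}.get('missing-key')
          # old style; on failure the message is a generic "None != <value>"
          self.assertEqual(None, result)
          # preferred style; the failure message says the value is not None
          self.assertIsNone(result)

  if __name__ == '__main__':
      unittest.main()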

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions



[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2016-07-11 Thread YaoZheng_ZTE
** Also affects: anvil
   Importance: Undecided
   Status: New

** Changed in: anvil
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

** Also affects: networking-brocade
   Importance: Undecided
   Status: New

** Changed in: networking-brocade
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in anvil:
  New
Status in bifrost:
  Fix Released
Status in Blazar:
  In Progress
Status in Ceilometer:
  New
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in dox:
  New
Status in DragonFlow:
  New
Status in Freezer:
  New
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Heat Translator:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in kolla-mesos:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-brocade:
  New
Status in networking-cisco:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Fix Released
Status in ooi:
  Fix Released
Status in os-brick:
  In Progress
Status in os-client-config:
  Fix Released
Status in oslo.messaging:
  New
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Released
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Quark: Money Reinvented:
  New
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  New
Status in Solum:
  Fix Released
Status in Stackalytics:
  Fix Released
Status in OpenStack Object Storage (swift):
  New
Status in taskflow:
  New
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in python-tuskarclient package in Ubuntu:
  Fix Committed

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to get
  clearer messages in case of failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions



[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2016-07-11 Thread YaoZheng_ZTE
** Also affects: freezer
   Importance: Undecided
   Status: New

** Changed in: freezer
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

** Also affects: dragonflow
   Importance: Undecided
   Status: New

** Changed in: dragonflow
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in bifrost:
  Fix Released
Status in Blazar:
  In Progress
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in dox:
  New
Status in DragonFlow:
  New
Status in Freezer:
  New
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Heat Translator:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in kolla-mesos:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-cisco:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Fix Released
Status in ooi:
  Fix Released
Status in os-brick:
  New
Status in os-client-config:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Released
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  New
Status in Solum:
  Fix Released
Status in Stackalytics:
  Fix Released
Status in OpenStack Object Storage (swift):
  New
Status in taskflow:
  New
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in python-tuskarclient package in Ubuntu:
  Fix Committed

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to get
  clearer messages in case of failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions



[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-07-08 Thread YaoZheng_ZTE
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in glance_store:
  New
Status in OpenStack Identity (keystone):
  New
Status in Magnum:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New
Status in python-cinderclient:
  New
Status in python-glanceclient:
  New
Status in python-heatclient:
  New
Status in python-keystoneclient:
  New
Status in python-neutronclient:
  New
Status in python-novaclient:
  New
Status in python-rackclient:
  New
Status in python-swiftclient:
  New
Status in rack:
  New
Status in Rally:
  New
Status in OpenStack Object Storage (swift):
  New
Status in tempest:
  New
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  We should remove the logging from these tests.
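
  A hypothetical before/after sketch of what removing such logging looks like
  (the real occurrences are spread across the projects listed above):

  import logging
  import unittest

  LOG = logging.getLogger(__name__)

  class ExampleTest(unittest.TestCase):
      def test_addition(self):
          result = 1 + 1
          # before: the test itself emitted log output, which only adds noise
          # LOG.debug('result is %s', result)
          # after: simply assert on the behaviour under test
          self.assertEqual(2, result)

  if __name__ == '__main__':
      unittest.main()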

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions



[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-07-07 Thread YaoZheng_ZTE
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: python-glanceclient
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

** Changed in: python-cinderclient
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in glance_store:
  New
Status in OpenStack Identity (keystone):
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New
Status in python-cinderclient:
  New
Status in python-glanceclient:
  New
Status in python-heatclient:
  New
Status in python-novaclient:
  New
Status in Rally:
  New
Status in OpenStack Object Storage (swift):
  New
Status in tempest:
  New

Bug description:
  We should remove the logging from these tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions



[Yahoo-eng-team] [Bug 1595773] Re: Make print py3 compatible

2016-07-07 Thread YaoZheng_ZTE
** No longer affects: glance-store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595773

Title:
  Make print py3 compatible

Status in daisycloud-core:
  New
Status in Fuel Plugins:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  In Python 3,

  the print statement is removed; use the print() function to achieve the
  same behaviour.

  Python 3:

  #!/usr/bin/python
  # -*- coding: utf-8 -*-
  print ("cinder")

  print "cinder"

  
File "code", line 5
  print "cinder"
   ^
  SyntaxError: Missing parentheses in call to 'print'
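
  Not part of the original report, but a common way to keep one code base
  running on both Python 2 and 3 is the __future__ import:

  #!/usr/bin/python
  # -*- coding: utf-8 -*-
  from __future__ import print_function  # print() becomes a function on Python 2 too

  print("cinder")  # valid on both Python 2 and Python 3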

To manage notifications about this bug go to:
https://bugs.launchpad.net/daisycloud-core/+bug/1595773/+subscriptions



[Yahoo-eng-team] [Bug 1595773] Re: Make print py3 compatible

2016-06-24 Thread YaoZheng_ZTE
** Project changed: swift-swf => swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595773

Title:
  Make print py3 compatible

Status in Fuel Plugins:
  New
Status in glance_store:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in python-cinderclient:
  New
Status in Rally:
  New
Status in OpenStack Object Storage (swift):
  New

Bug description:
  In Python 3,

  the print statement is removed; use the print() function to achieve the
  same behaviour.

  Python 3:

  #!/usr/bin/python
  # -*- coding: utf-8 -*-
  print ("cinder")

  print "cinder"

  
File "code", line 5
  print "cinder"
   ^
  SyntaxError: Missing parentheses in call to 'print'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel-plugins/+bug/1595773/+subscriptions



[Yahoo-eng-team] [Bug 1595786] Re: Make string.letters PY3 compatible

2016-06-23 Thread YaoZheng_ZTE
** Project changed: nova => python-novaclient

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595786

Title:
  Make  string.letters PY3 compatible

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  New
Status in SWIFT:
  New

Bug description:
  string.letters and the related string.lowercase and string.uppercase are
  removed in Python 3; please switch to string.ascii_letters,
  string.ascii_lowercase, string.ascii_uppercase, etc.

  
  as:
  
https://github.com/openstack/nova/blob/04f2d81bb4d1e26482b613ab799bb38ce304e143/nova/tests/unit/api/openstack/compute/test_console_output.py#L102
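
  A short hypothetical snippet showing the replacement (not the code at the
  linked line):

  import random
  import string

  # Python 2 only:   string.letters, string.lowercase, string.uppercase
  # Python 2 and 3:  string.ascii_letters, string.ascii_lowercase, string.ascii_uppercase
  token = ''.join(random.choice(string.ascii_letters) for _ in range(8))
  print(token)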

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1595786/+subscriptions



[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2016-06-14 Thread YaoZheng_ZTE
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Barbican:
  In Progress
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  New
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in python-glanceclient:
  New
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
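
  A minimal hypothetical example; by convention the expected value comes
  first, so a failure message labels the two values correctly:

  import unittest

  def flavor_name():
      """Hypothetical helper under test."""
      return 'm1.tiny'

  class ExampleTest(unittest.TestCase):
      def test_flavor_name(self):
          observed = flavor_name()
          # wrong:   self.assertEqual(observed, 'm1.tiny')
          # correct: expected value first, observed value second
          self.assertEqual('m1.tiny', observed)

  if __name__ == '__main__':
      unittest.main()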

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions



[Yahoo-eng-team] [Bug 1592376] [NEW] Cinder driver: the size_gb calculation needs improvement

2016-06-14 Thread YaoZheng_ZTE
Public bug reported:

In the line
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/cinder.py#L436,
the intent is to get the ceiling of size_gb. We can use the Python math
module's math.ceil() function instead, which improves readability, so I
suggest improving it.
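
For illustration, a hedged sketch of the suggestion (variable names here are
hypothetical; the actual code at the linked line may differ):

import math

GiB = 1024 ** 3
image_size_bytes = 5 * GiB + 1  # hypothetical size just over 5 GiB

# suggested, more readable way to round up to whole gigabytes:
size_gb = int(math.ceil(float(image_size_bytes) / GiB))
print(size_gb)  # -> 6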

** Affects: glance
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

** Description changed:

- In the line 
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/cinder.py#L436,
 Its intent is to get ceiling of size_gb. we can use python math module
- math.ceil() function. This can improve the code readability. So i suggest 
improve it.
+ In the line
+ 
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/cinder.py#L436,
+ Its intent is to get ceiling of size_gb. we can use python math module
+ math.ceil() function. This can improve the code readability. So i
+ suggest improve it.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1592376

Title:
  Cinder driver: the size_gb calculation needs improvement

Status in Glance:
  New

Bug description:
  In the line
  https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/cinder.py#L436,
  the intent is to get the ceiling of size_gb. We can use the Python math
  module's math.ceil() function instead, which improves readability, so I
  suggest improving it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1592376/+subscriptions



[Yahoo-eng-team] [Bug 1582558] [NEW] Live-migration exception handling improvement

2016-05-17 Thread YaoZheng_ZTE
Public bug reported:

Description:
1. In /nova/compute/manager.py, the function _rollback_live_migration should
catch exceptions from 'remove_volume_connection' and let the rollback process
continue cleaning up the other resources. Because 'remove_volume_connection'
calls out to cinder, the probability of an exception is relatively high.
2. In /nova/compute/manager.py, the function _post_live_migration should catch
all exceptions raised while cleaning up resources on the source host. Because
the vm has already been migrated to the dest host, we should try to make sure
the vm keeps running normally.
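
A rough sketch of point 1, assuming a rollback loop similar to the one in
manager.py (the function and argument names below are simplified stand-ins):

import logging

LOG = logging.getLogger(__name__)

def rollback_live_migration_sketch(remove_volume_connection, bdms):
    # Keep rolling back even if one volume disconnect fails, since the call
    # reaches out to cinder and may raise.
    for bdm in bdms:
        try:
            remove_volume_connection(bdm)
        except Exception:
            LOG.exception('remove_volume_connection failed for %s; '
                          'continuing rollback', bdm)
    # ... continue cleaning up networking and destination-host state ...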

** Affects: nova
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582558

Title:
  Live-migration exception handling improvement

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  1. In /nova/compute/manager.py, the function _rollback_live_migration should
  catch exceptions from 'remove_volume_connection' and let the rollback process
  continue cleaning up the other resources. Because 'remove_volume_connection'
  calls out to cinder, the probability of an exception is relatively high.
  2. In /nova/compute/manager.py, the function _post_live_migration should catch
  all exceptions raised while cleaning up resources on the source host. Because
  the vm has already been migrated to the dest host, we should try to make sure
  the vm keeps running normally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1582558/+subscriptions



[Yahoo-eng-team] [Bug 1582543] [NEW] After pre live-migration fails, the source connection information is not rolled back

2016-05-16 Thread YaoZheng_ZTE
Public bug reported:

Description:
When booting a vm from volume, pre_live_migration updates the bdm
connection_info with the dest host's connection_info. So, if
pre_live_migration fails, _rollback_live_migration should write the source
host's connection_info back to the bdm table. Otherwise the virtual machine
cannot work properly after the migration failure.
Steps to reproduce:
1. Boot a vm from volume.
2. Arrange for pre_live_migration to fail.
3. Run nova live-migration vm.
4. The vm looks fine, but if you hard reboot it, the vm becomes anomalous.

Expected result:
After the vm live-migration fails, the vm can still be ok.

Actual result:
The vm's bdm connection_info was updated to the dest information, but the
virsh process is still on the source host. So the vm's hard-reboot, stop and
start actions do not work.
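
A rough sketch of the suggested fix, assuming the source host's
connection_info is still available when the rollback runs (names are
hypothetical, not the actual nova code):

def rollback_connection_info_sketch(bdms, source_connection_infos):
    # Put the source host's connection_info back into the
    # block_device_mapping rows after pre_live_migration fails.
    for bdm in bdms:
        source_info = source_connection_infos.get(bdm.volume_id)
        if source_info is not None:
            bdm.connection_info = source_info
            bdm.save()  # assumes the usual nova-object style persistence
    # Without this, the BDM keeps the destination connection_info and a later
    # hard reboot / stop / start uses the wrong target.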

** Affects: nova
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582543

Title:
  After pre live-migration fails, the source connection information is not
  rolled back

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  When booting a vm from volume, pre_live_migration updates the bdm
  connection_info with the dest host's connection_info. So, if
  pre_live_migration fails, _rollback_live_migration should write the source
  host's connection_info back to the bdm table. Otherwise the virtual machine
  cannot work properly after the migration failure.
  Steps to reproduce:
  1. Boot a vm from volume.
  2. Arrange for pre_live_migration to fail.
  3. Run nova live-migration vm.
  4. The vm looks fine, but if you hard reboot it, the vm becomes anomalous.

  Expected result:
  After the vm live-migration fails, the vm can still be ok.

  Actual result:
  The vm's bdm connection_info was updated to the dest information, but the
  virsh process is still on the source host. So the vm's hard-reboot, stop and
  start actions do not work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1582543/+subscriptions



[Yahoo-eng-team] [Bug 1582052] Re: glance v1 and v2 image-show api are not compatible

2016-05-16 Thread YaoZheng_ZTE
** Project changed: glance => cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1582052

Title:
  glance v1 and v2  image-show api  are not compatible

Status in Cinder:
  New

Bug description:
  The V1 image-show interface returns the following format:
  

  The V2 image-show interface returns the following format:
  {u'status': u'active', u'tags': [], u'container_format': None, u'min_ram': 0, 
u'updated_at': u'2016-05-10T11:17:10Z', u'visibility': u'private', 
u'image_name': u'test_cirros', u'image_id': 
u'498cc673-7c95-4883-9df7-6850deb8168f', u'file': 
u'/v2/images/c6b906d0-a22e-4d8f-83e6-b54e887d3335/file', u'owner': 
u'55bbe36a87af48c0af04c9204a49a854', u'virtual_size': None, u'id': 
u'c6b906d0-a22e-4d8f-83e6-b54e887d3335', u'size': 0, u'min_disk': 0, u'name': 
u'test_snapshot_zy', u'checksum': None, u'created_at': u'2016-05-10T11:17:10Z', 
u'block_device_mapping': u'[{"guest_format": null, "boot_index": 0, 
"no_device": null, "snapshot_id": "55581528-cb88-4e5a-bee4-d3bb267a93cc", 
"delete_on_termination": null, "disk_bus": "virtio", "image_id": null, 
"source_type": "snapshot", "device_type": "disk", "volume_id": null, 
"destination_type": "volume", "volume_size": null}, {"guest_format": null, 
"boot_index": null, "no_device": null, "snapshot_id": 
"9fcbc64c-1ebc-43d7-b899-6c7a42093697", "delete_on_termination": null, 
"disk_bus": null, "image_id": null, "source_type": "snapshot", "device_type": 
null, "volume_id": null, "destination_type": "volume", "volume_size": null}]', 
u'disk_format': None, u'bdm_v2': u'True', u'protected': False, 
u'root_device_name': u'/dev/vda', u'schema': u'/v2/schemas/image'}

  In V1, 'block_device_mapping' is a key of 'properties', but there is no
  'properties' in V2.
  So cinder and nova do not parse the 'block_device_mapping' attribute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1582052/+subscriptions



[Yahoo-eng-team] [Bug 1582482] [NEW] After the live-migration source disk check fails, bdm.device_path is lost

2016-05-16 Thread YaoZheng_ZTE
"ip": "10.43.203.3", "host": 
"2C5_10_DELL05", "multipath": true, "initiator": 
"iqn.opencos.rh:995be1fac333"}, "serial": 
"78b80439-8bea-4900-aa7e-ddf2a9b0b8cf", "data": {"target_luns": [3, 3, 3, 3, 3, 
3, 3], "target_iqns": ["iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:90", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:90", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:90", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91"], "device_path": 
"/dev/mapper/mpathdks", "target_discovered": false, "qos_specs": null, 
"target_iqn": "iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:90", 
"target_portals": ["172.168.102.25:3260", "172.168.103.25:3260", 
"172.168.104.25:3260", "172.168.101.25:3260", "172.168.102.27:3260", 
"172.168.103.27:3260", "172.168.104.27:3260"], "volume_id": "7
 8b80439-8bea-4900-aa7e-ddf2a9b0b8cf", "target_lun": 3, "access_mode": "rw", 
"multipath_id": "mpathdks", "target_portal": "172.168.102.25:3260"}}

After run live-migration, the bdm information as following:
{"driver_volume_type": "iscsi", "connector": {"ip": "10.43.203.3", "host": 
"2C5_10_DELL05", "multipath": true, "initiator": 
"iqn.opencos.rh:995be1fac333"}, "serial": 
"78b80439-8bea-4900-aa7e-ddf2a9b0b8cf", "data": {"target_luns": [3, 3, 3, 3, 3, 
3, 3], "target_iqns": ["iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:90", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:90", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:90", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91", 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91"], "target_discovered": 
false, "qos_specs": null, "target_iqn": 
"iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:90", "target_portals": 
["172.168.102.25:3260", "172.168.103.25:3260", "172.168.104.25:3260", 
"172.168.101.25:3260", "172.168.102.27:3260", "172.168.103.27:3260", 
"172.168.104.27:3260"], "volume_id": "78b80439-8bea-4900-aa7e-ddf2a9b0b8cf", "
 target_lun": 3, "access_mode": "rw", "multipath_id": "mpathdks", 
"target_portal": "172.168.102.25:3260"}}
So, the device_path key was lost.

Environment:
I use the mitaka version.

** Affects: nova
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582482

Title:
  After the live-migration source disk check fails, bdm.device_path is lost

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  When live-migrating a vm, the process raised an exception in the function
  "check_can_live_migrate_source". Afterwards the vm looks running and active,
  but the bdm.connection_info has lost its 'device_path' key.
  reproduce:
  1. boot a vm from volume.
  2. In /nova/virt/libvirt/driver.py, make the function
  "check_can_live_migrate_source" throw an exception.
  3. run nova live-migrate vm.
  the nova-compute log as following:
  2016-05-17 08:56:53.003 11926 ERROR oslo_messaging.rpc.dispatcher 
[req-a29e2ad4-8b28-4075-9c79-966086f99946 95d365ac3e3948c9be554a33855c6e07 
853481fe4d1e4d1eb0136c7ecf46e5e7 - - -] Exception during message handling: 
2C5_10_DELL05 is not on shared storage: Live migration can not be used without 
shared storage.
  2016-05-17 08:56:53.003 11926 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-05-17 08:56:53.003 11926 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-05-17 08:56:53.003 11926 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-05-17 08:56:53.003 11926 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/si

[Yahoo-eng-team] [Bug 1582052] [NEW] glance v1 and v2 image-show api are not compatible

2016-05-15 Thread YaoZheng_ZTE
Public bug reported:

The V1 image-show interface returns the following format:


The V2 image-show interface returns the following format:
{u'status': u'active', u'tags': [], u'container_format': None, u'min_ram': 0, 
u'updated_at': u'2016-05-10T11:17:10Z', u'visibility': u'private', 
u'image_name': u'test_cirros', u'image_id': 
u'498cc673-7c95-4883-9df7-6850deb8168f', u'file': 
u'/v2/images/c6b906d0-a22e-4d8f-83e6-b54e887d3335/file', u'owner': 
u'55bbe36a87af48c0af04c9204a49a854', u'virtual_size': None, u'id': 
u'c6b906d0-a22e-4d8f-83e6-b54e887d3335', u'size': 0, u'min_disk': 0, u'name': 
u'test_snapshot_zy', u'checksum': None, u'created_at': u'2016-05-10T11:17:10Z', 
u'block_device_mapping': u'[{"guest_format": null, "boot_index": 0, 
"no_device": null, "snapshot_id": "55581528-cb88-4e5a-bee4-d3bb267a93cc", 
"delete_on_termination": null, "disk_bus": "virtio", "image_id": null, 
"source_type": "snapshot", "device_type": "disk", "volume_id": null, 
"destination_type": "volume", "volume_size": null}, {"guest_format": null, 
"boot_index": null, "no_device": null, "snapshot_id": 
"9fcbc64c-1ebc-43d7-b899-6c7a42093697", "delete_on_termination": null, 
"disk_bus": null, "image_id": null, "source_type": "snapshot", "device_type": 
null, "volume_id": null, "destination_type": "volume", "volume_size": null}]', 
u'disk_format': None, u'bdm_v2': u'True', u'protected': False, 
u'root_device_name': u'/dev/vda', u'schema': u'/v2/schemas/image'}

In V1, 'block_device_mapping' is a key of 'properties', but there is no
'properties' in V2.
So cinder and nova do not parse the 'block_device_mapping' attribute.
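
Schematically (constructed from the description above, not actual API
output), the difference looks like this:

# V1 image-show: block_device_mapping is nested under 'properties'
v1_image = {
    'name': 'test_snapshot_zy',
    'properties': {
        'block_device_mapping': '[{"source_type": "snapshot", ...}]',
    },
}

# V2 image-show: there is no 'properties' wrapper; the key is top-level
v2_image = {
    'name': 'test_snapshot_zy',
    'block_device_mapping': '[{"source_type": "snapshot", ...}]',
}

# Code that only reads image['properties']['block_device_mapping'] therefore
# misses the attribute when it talks to the v2 API.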

** Affects: glance
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1582052

Title:
  glance v1 and v2  image-show api  are not compatible

Status in Glance:
  New

Bug description:
  The V1 image-show interface returns the following format:
  

  The V2 image-show interface returns the following format:
  {u'status': u'active', u'tags': [], u'container_format': None, u'min_ram': 0, 
u'updated_at': u'2016-05-10T11:17:10Z', u'visibility': u'private', 
u'image_name': u'test_cirros', u'image_id': 
u'498cc673-7c95-4883-9df7-6850deb8168f', u'file': 
u'/v2/images/c6b906d0-a22e-4d8f-83e6-b54e887d3335/file', u'owner': 
u'55bbe36a87af48c0af04c9204a49a854', u'virtual_size': None, u'id': 
u'c6b906d0-a22e-4d8f-83e6-b54e887d3335', u'size': 0, u'min_disk': 0, u'name': 
u'test_snapshot_zy', u'checksum': None, u'created_at': u'2016-05-10T11:17:10Z', 
u'block_device_mapping': u'[{"guest_format": null, "boot_index": 0, 
"no_device": null, "snapshot_id": "55581528-cb88-4e5a-bee4-d3bb267a93cc", 
"delete_on_termination": null, "disk_bus": "virtio", "image_id": null, 
"source_type": "snapshot", "device_type": "disk", "volume_id": null, 
"destination_type": "volume", "volume_size": null}, {"guest_format": null, 
"boot_index": null, "no_device": null, "snapshot_id": 
"9fcbc64c-1ebc-43d7-b899-6c7a42093697", "delete_on_termination": null, 
"disk_bus": null, "image_id": null, "source_type": "snapshot", "device_type": 
null, "volume_id": null, "destination_type": "volume", "volume_size": null}]', 
u'disk_format': None, u'bdm_v2': u'True', u'protected': False, 
u'root_device_name': u'/dev/vda', u'schema': u'/v2/schemas/image'}

  In V1, 'block_device_mapping' is a key of 'properties', but there is no
  'properties' in V2.
  So cinder and nova do not parse the 'block_device_mapping' attribute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1582052/+subscriptions



[Yahoo-eng-team] [Bug 1425382] Re: A volume was attached to the vm instance twice; the vm instance cannot use this volume normally

2016-05-09 Thread YaoZheng_ZTE
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425382

Title:
  A volume was attached to the vm instance twice; the vm instance cannot
  use this volume normally

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Reproduce as follows:
  1: create a vm instance named test_vm1
  2: create a volume named test_volume1
  3: run the nova command: nova volume-attach test_vm1 test_volume1, then
  confirm that the volume attached to the instance ok
  [root@opencos_cjl ~(keystone_admin)]# nova list
  
+--+---+++-+---+
  | ID   | Name 
 | Status | Task State | Power State | Networks 
 |
  
+--+---+++-+---+
  | b917e46b-539f-4024-bced-73c6b7c00ea2 | 
TestZteOneVolumeAttatchTo2Servers-instance-1863856957 | ACTIVE | -  | 
Running | zfl_internal_net=192.168.0.107|
  | d0e5f1a4-9da1-4c39-a17d-12e43d20cd10 | 
TestZteOneVolumeAttatchTo2Servers-instance-8729469| ACTIVE | -  | 
Running | zfl_internal_net=192.168.0.108|
  | 9a6c6aff-d77c-4699-a41f-abb9d8e4b09e | test2
 | ACTIVE | -  | Running | 
zfl_internal_net=192.168.0.101, 10.43.210.232 |
  | 4a338d56-0daf-48d8-bcb5-d46de74b3887 | test_vm1 
 | ACTIVE | -  | Running | 
zfl_internal_net=192.168.0.109|
  
+--+---+++-+---+
  [root@opencos_cjl ~(keystone_admin)]# 
  [root@opencos_cjl ~(keystone_admin)]# nova volume-list
  
+--+---+--+--+-+--+
  | ID   | Status| Display Name | Size | 
Volume Type | Attached to  |
  
+--+---+--+--+-+--+
  | 22ad798d-77d2-4031-8b82-a5512e9f9284 | in-use| test_volume1 | 1| 
None| 4a338d56-0daf-48d8-bcb5-d46de74b3887 |
  | 76f708fe-0e47-4f1a-a43b-08001f6a65d9 | available | test | 1| 
None|  |
  
+--+---+--+--+-+--+

  4: run the nova command again: nova volume-attach test_vm1 test_volume1;
  it will raise an exception as follows:
 
  [root@opencos_cjl ~(keystone_admin)]# nova volume-attach 
4a338d56-0daf-48d8-bcb5-d46de74b3887  22ad798d-77d2-4031-8b82-a5512e9f9284
  ERROR: Invalid volume: Volume has been attached to the instance (HTTP 400) 
(Request-ID: req-24f8b244-9809-41e4-b8e8-8a5ca7157c1c)
  The exception is correct, but the issue is: after step 4, test_vm1 cannot
  use test_volume1 normally. If you log in to the test_vm1 OS, you will find
  that the volume attached as /dev/vdb does not work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425382/+subscriptions



[Yahoo-eng-team] [Bug 1563209] [NEW] instance's snapshot image can't restore the instance

2016-03-29 Thread YaoZheng_ZTE
Public bug reported:

Reproduce as follows:
1、Boot a vm from the image.
2、Create a blank volume.
3、Attach the volume to the vm.
4、Snapshot the vm.
5、After the fourth step, you will find a new image in glance;
   let's call it “vm_snapshot_image”.
6、Boot a new vm from the "vm_snapshot_image", but the new vm's data volume
was lost.

** Affects: nova
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Description changed:

  Reproducing method as following:
- 1、Boot a vm from the image 
+ 1、Boot a vm from the image
  2、Create a  blank volume .
  3、Attach the volume to the vm .
  4、Snapshot the vm.
  6、After the  fourth step, you will find that you have a new image in glance.
-  We're supposed to call it “vm_snapshot_image”.
- 7、Boot a new vm from the "vm_snapshot_image", but the new vm's  data volume 
was lost.
+    We're supposed to call it “vm_snapshot_image”.
+ 7、Boot a new vm from the "vm_snapshot_image", but the new vm's  data volume  
was lost.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563209

Title:
  instance's snapshot image  can't restore the instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Reproduce as follows:
  1、Boot a vm from the image.
  2、Create a blank volume.
  3、Attach the volume to the vm.
  4、Snapshot the vm.
  5、After the fourth step, you will find a new image in glance;
     let's call it “vm_snapshot_image”.
  6、Boot a new vm from the "vm_snapshot_image", but the new vm's data volume
  was lost.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1563209/+subscriptions



[Yahoo-eng-team] [Bug 1550639] [NEW] After migrating a volume attached to an instance, the instance can't run normally

2016-02-26 Thread YaoZheng_ZTE
g.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 395, in 
decorated_function
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5965, in 
swap_volume
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5932, in 
_swap_volume
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher 
self.volume_api.unreserve_volume(context, new_volume_id)
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5913, in 
_swap_volume
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher resize_to)
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1241, in 
swap_volume
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher 
self._disconnect_volume(old_connection_info, disk_dev)
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1092, in 
_disconnect_volume
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher raise
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.rpc.dispatcher TypeError: 
exceptions must be old-style classes or derived from BaseException, not NoneType
5. Then the instance is still shown as running and active, but after logging
in to the virtual machine you find the guest OS file system has become
read-only.
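
For context (added illustration, not part of the original traceback): the
TypeError at the end of the traceback is what Python 2 produces when a bare
raise runs while no exception is active, e.g.:

def cleanup():
    raise  # bare raise, but there is no active exception to re-raise

try:
    cleanup()
except Exception as exc:
    # Python 2: TypeError: exceptions must be old-style classes or derived
    #           from BaseException, not NoneType
    # Python 3: RuntimeError: No active exception to re-raise
    print(exc)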

** Affects: nova
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550639

Title:
  After migrating a volume attached to an instance, the instance can't
  run normally

Status in OpenStack Compute (nova):
  New

Bug description:
  Reproduce as follows:
  1. create a volume from an image
  [root@2C5_10_DELL05 ~(keystone_admin)]# cinder  create --image-id 
fd8330b3-a307-4140-8fe0-01341b583e26 --name test_image_volume  --volume-type  
KSIP 1 
  
+---+--+
  |Property   |Value
 |
  
+---+--+
  |  attachments  |  [] 
 |
  |   availability_zone   | nova
 |
  |bootable   |false
 |
  |  consistencygroup_id  | None
 |
  |   created_at  |  2016-02-27T04:20:37.00 
 |
  |  description  | None
 |
  |   encrypted   |False
 |
  |   id  | 
a0dae16a-2669-49c7-a118-250c31adc655 |
  |metadata   |  {} 
 |
  |  multiattach  |False
 |
  |  name |  test_image_volume  
 |
  | os-vol-host-attr:host | None
 |
  | os-vol-mig-status-attr:migstat| None
 |
  | os-vol-mig-status-attr:name_id| None
 |
  |  os-vol-tenant-attr:tenant_id |   181a578bc97642f2b9e153bec622f130  
 |
  |   os-volume-replication:driver_data   | None
 |
  | os-volume-replication:extended_status | None
 |
  |   replication_status  |   disabled  
 |
  |  size |  1  
 |
  |  snapshot_id  |   

[Yahoo-eng-team] [Bug 1550250] [NEW] migrate in-use status volume, the volume's "delete_on_termination" flag lost

2016-02-26 Thread YaoZheng_ZTE
Public bug reported:

Reproduce as follows:
1. create a blank volume named "test_show"
2. create a vm instance named test and attach the volume "test_show".
[root@2C5_10_DELL05 ~(keystone_admin)]# nova boot --flavor 1 --image 
fd8330b3-a307-4140-8fe0-01341b583e26 --block-device-mapping 
vdb=4ee8dc8e-9ebc-4f82-bab1-862ee7866f2f:::1 --nic 
net-id=5c8f7e7a-5a75-48eb-9c68-096278585c18 test
+--+--+
| Property | Value  
  |
+--+--+
| OS-DCF:diskConfig| MANUAL 
  |
| OS-EXT-AZ:availability_zone  | nova   
  |
| OS-EXT-SRV-ATTR:host | -  
  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -  
  |
| OS-EXT-SRV-ATTR:instance_name| instance-063f  
  |
| OS-EXT-STS:power_state   | 0  
  |
| OS-EXT-STS:task_state| scheduling 
  |
| OS-EXT-STS:vm_state  | building   
  |
| OS-SRV-USG:launched_at   | -  
  |
| OS-SRV-USG:terminated_at | -  
  |
| accessIPv4   |
  |
| accessIPv6   |
  |
| adminPass| 8SGyuuuESf8n   
  |
| autostart| TRUE   
  |
| boot_index_type  |
  |
| config_drive |
  |
| created  | 2016-02-26T09:15:43Z   
  |
| flavor   | m1.tiny (1)
  |
| hostId   |
  |
| id   | 9010a596-d0e7-42e3-a472-d164f02c0e34   
  |
| image| cirros 
(fd8330b3-a307-4140-8fe0-01341b583e26)|
| key_name | -  
  |
| metadata | {} 
  |
| move | TRUE   
  |
| name | test   
  |
| novnc| TRUE   
  |
| os-extended-volumes:volumes_attached | [{"id": 
"4ee8dc8e-9ebc-4f82-bab1-862ee7866f2f"}] |
| priority | 50 
  |
| progress | 0  
  |
| qos  |
  |
| security_groups  | default
  |
| status   | BUILD  
  |
| tenant_id| 181a578bc97642f2b9e153bec622f130   
  |
| updated  | 2016-02-26T09:15:43Z   
  |
| user_id  | 8b34e1ab75024fcba0ea69a6fd0937c3   
  |
+--+--+
3. After step 2, wait until the instance "test" is ready, then check the BDM
table: the "delete_on_termination" flag of the volume used by the instance is True.
4.run cinder migrate volume
[root@2C5_10_DELL05 ~(keystone_admin)]# cinder migrate 
4ee8dc8e-9ebc-4f82-bab1-862ee7866f2f 
2C5_10_DELL05@KS3200ISCSIDriver-1#KS3200_IPSAN
5. After the volume migration in step 4 succeeds, check the BDM table again:
the "delete_on_termination" flag of the volume used by the instance has
changed to False.

** Affects: nova
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550250

Title:
  migrate in-use status volume, 

[Yahoo-eng-team] [Bug 1534083] [NEW] Glance api config file lost the configuration item "filesystem_store_datadir" default value

2016-01-14 Thread YaoZheng_ZTE
Public bug reported:

The default value of the config item "filesystem_store_datadir" is missing,
so after installing glance, users have to configure it manually.

** Affects: glance
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1534083

Title:
  Glance api config file lost  the configuration item
  "filesystem_store_datadir" default value

Status in Glance:
  New

Bug description:
  The default value of the config item "filesystem_store_datadir" is missing,
  so after installing glance, users have to configure it manually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1534083/+subscriptions



[Yahoo-eng-team] [Bug 1533949] [NEW] Glance tasks lost configuration item "conversion_format"

2016-01-13 Thread YaoZheng_ZTE
Public bug reported:

issue:

The glance convert task uses the configuration item "conversion_format", but
this configuration item is missing from the glance-api.conf configuration file.

** Affects: glance
 Importance: Undecided
 Assignee: YaoZheng_ZTE (zheng-yao1)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1533949

Title:
  Glance tasks  lost configuration item "conversion_format"

Status in Glance:
  New

Bug description:
  issue:
   
  The glance convert task uses the configuration item "conversion_format", but
  this configuration item is missing from the glance-api.conf configuration file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1533949/+subscriptions



[Yahoo-eng-team] [Bug 1533536] [NEW] glance api v2 cannot check image checksum

2016-01-13 Thread YaoZheng_ZTE
Public bug reported:

Issue:

I cannot create an image with the Glance V2 API while specifying the
"checksum" parameter. The checksum is important for users, since it can be
used to check the integrity of the image.
But currently the Glance V2 API create-image interface cannot take the
"checksum" parameter and cannot verify the 'checksum'.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1533536

Title:
  glance api v2 cannot check image checksum

Status in Glance:
  New

Bug description:
  Issue:

  I cannot create an image with the Glance V2 API while specifying the
  "checksum" parameter. The checksum is important for users, since it can be
  used to check the integrity of the image.
  But currently the Glance V2 API create-image interface cannot take the
  "checksum" parameter and cannot verify the 'checksum'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1533536/+subscriptions



[Yahoo-eng-team] [Bug 1402504] Re: Live migration of volume backed instances broken because bug 1288039 changed

2015-12-18 Thread YaoZheng_ZTE
Hi Sean:
Through testing, the live-migration function in the latest K version still
has a few problems. Can I use this bug to contribute to the community?

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1402504

Title:
  Live migration of volume backed instances broken because bug 1288039
  changed

Status in OpenStack Compute (nova):
  New

Bug description:
  1、In the /nova/compute/manager.py pre_live_migration function, the
  block_device_mapping table has been updated with the destination host's
  target LUN id.
  2、In the /nova/compute/manager.py post_live_migration_at_destination
  function, when cleaning up the source host's target LUN, it first queries
  the target LUN id from the block_device_mapping table, but in the step above
  that LUN id has already been updated to the destination host's target LUN
  id. So when the source and destination target LUN ids are not the same, an
  error occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1402504/+subscriptions



[Yahoo-eng-team] [Bug 1477368] [NEW] create volume from snapshot , the request body is not correct

2015-07-22 Thread YaoZheng_ZTE
Public bug reported:

How to reproduce:
1. create a new blank volume
2. snapshot the volume created at step 1
3. on the volume snapshots page, create another volume sourced from the
snapshot created at step 2
4. open the cinder-api debug log; you can find the request message body that
horizon sent:
 [root@compuer03 cinder]# cat cinder-api.log |grep "Create volume request body"
2015-07-22 10:32:22.093 29454 DEBUG cinder.api.v2.volumes 
[req-9598a2cc-5d9a-4399-99bb-477bef580dc4 150ca453945849e8b79643c1da0c6e97 
51852671947346fead3cdc9ec5f69937 - - -] Create volume request body: {u'volume': 
{u'status': u'creating', u'user_id': None, u'description': u'', u'imageRef': 
None, u'availability_zone': None, 'scheduler_hints': {}, u'attach_status': 
u'detached', u'source_volid': None, u'name': u'333', u'metadata': {}, 
u'consistencygroup_id': None, u'volume_type': u'FCsan', u'snapshot_id': None, 
u'project_id': None, u'source_replica': None, u'size': 1}}

There you will find "u'snapshot_id': None", so the volume's source
snapshot_id was lost.
5. I reproduced this issue on the K version.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1477368

Title:
  create volume from snapshot ,the request body is not correct

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:
  1. create a new blank volume
  2. snapshot the volume created at step 1
  3. on the volume snapshots page, create another volume sourced from the
  snapshot created at step 2
  4. open the cinder-api debug log; you can find the request message body that
  horizon sent:
   [root@compuer03 cinder]# cat cinder-api.log |grep "Create volume request 
body"
  2015-07-22 10:32:22.093 29454 DEBUG cinder.api.v2.volumes 
[req-9598a2cc-5d9a-4399-99bb-477bef580dc4 150ca453945849e8b79643c1da0c6e97 
51852671947346fead3cdc9ec5f69937 - - -] Create volume request body: {u'volume': 
{u'status': u'creating', u'user_id': None, u'description': u'', u'imageRef': 
None, u'availability_zone': None, 'scheduler_hints': {}, u'attach_status': 
u'detached', u'source_volid': None, u'name': u'333', u'metadata': {}, 
u'consistencygroup_id': None, u'volume_type': u'FCsan', u'snapshot_id': None, 
u'project_id': None, u'source_replica': None, u'size': 1}}

  Note that the body contains "u'snapshot_id': None", so the source
snapshot_id was lost.
  5. This issue also reproduces on the Kilo version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1477368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471167] [NEW] A volume attached one instance not working properly in K version

2015-07-03 Thread YaoZheng_ZTE
Public bug reported:

Steps to reproduce:
1. Create one instance.

[root@opencosf0ccfb2525a94ffa814d647f08e4d6a4 ~(keystone_admin)]# nova list
+--+-+++-+---+
| ID   | Name| Status | Task State | Power 
State | Networks  |
+--+-+++-+---+
| dc7c8242-9e02-4acf-9ae4-08030380e629 | test_zy | ACTIVE | -  | 
Running | net=192.168.0.111 |
+--+-+++-+---+
2、run "nova volume-attach instance_id  volume_id ".

3. After step 2, the volume is attached to the instance successfully.

4、run "nova volume-attach instance_id  volume_id ", you will find the exception 
as following:
[root@opencosf0ccfb2525a94ffa814d647f08e4d6a4 ~(keystone_admin)]# nova 
volume-attach  dc7c8242-9e02-4acf-9ae4-08030380e629  
1435df8a-c4d6-4993-a0fd-4f57de66a28e
ERROR (BadRequest): Invalid volume: volume 
'1435df8a-c4d6-4993-a0fd-4f57de66a28e' status must be 'available'. Currently in 
'in-use' (HTTP 400) (Request-ID: req-45902cbb-1f00-432f-bfbf-b041bdcc2695)

5. Execute "nova reboot --hard" on the instance, then log in to the instance;
you will find that the volume attached as /dev/vdb no longer works correctly.
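As a client-side workaround sketch only (not a fix in nova itself), the second
attach request can be skipped when cinder already reports the volume as in
use; the credentials are hypothetical and the IDs are the ones from this
report:

    # Sketch: only ask nova to attach when the volume is still 'available'.
    from cinderclient.v2 import client as cinder_client
    from novaclient import client as nova_client

    cinder = cinder_client.Client('admin', 'password', 'demo',
                                  'http://controller:5000/v2.0')
    nova = nova_client.Client('2', 'admin', 'password', 'demo',
                              'http://controller:5000/v2.0')

    volume = cinder.volumes.get('1435df8a-c4d6-4993-a0fd-4f57de66a28e')
    if volume.status == 'available':
        nova.volumes.create_server_volume(
            'dc7c8242-9e02-4acf-9ae4-08030380e629', volume.id)
    else:
        # Avoid the HTTP 400 and the broken /dev/vdb described above.
        print('volume %s is %s, skipping attach' % (volume.id, volume.status))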

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471167

Title:
  A volume attached one instance not working properly in K version

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:
  1. Create one instance.

  [root@opencosf0ccfb2525a94ffa814d647f08e4d6a4 ~(keystone_admin)]# nova list
  
+--+-+++-+---+
  | ID   | Name| Status | Task State | 
Power State | Networks  |
  
+--+-+++-+---+
  | dc7c8242-9e02-4acf-9ae4-08030380e629 | test_zy | ACTIVE | -  | 
Running | net=192.168.0.111 |
  
+--+-+++-+---+
  2、run "nova volume-attach instance_id  volume_id ".

  3. After step 2, the volume is attached to the instance successfully.

  4、run "nova volume-attach instance_id  volume_id ", you will find the 
exception as following:
  [root@opencosf0ccfb2525a94ffa814d647f08e4d6a4 ~(keystone_admin)]# nova 
volume-attach  dc7c8242-9e02-4acf-9ae4-08030380e629  
1435df8a-c4d6-4993-a0fd-4f57de66a28e
  ERROR (BadRequest): Invalid volume: volume 
'1435df8a-c4d6-4993-a0fd-4f57de66a28e' status must be 'available'. Currently in 
'in-use' (HTTP 400) (Request-ID: req-45902cbb-1f00-432f-bfbf-b041bdcc2695)

  5. Execute "nova reboot --hard" on the instance, then log in to the
instance; you will find that the volume attached as /dev/vdb no longer works
correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470093] [NEW] The function "_get_multipath_iqn" get iqn is not complete

2015-06-30 Thread YaoZheng_ZTE
Public bug reported:

1. A SAN storage backend can expose more than one IQN, so one multipath
device may have more than one IQN.
2. The function is as follows:
    def _get_multipath_iqn(self, multipath_device):
        entries = self._get_iscsi_devices()
        for entry in entries:
            entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % entry)
            entry_multipath = self._get_multipath_device_name(entry_real_path)
            if entry_multipath == multipath_device:
                return entry.split("iscsi-")[1].split("-lun")[0]
        return None
So as soon as one entry matches the multipath_device, the function returns,
and it returns only one IQN. The problem is that the multipath device is
backed by several single-path devices, as shown below:

[root@R4300G2-ctrl02 ~]# ll /dev/disk/by-path/
lrwxrwxrwx 1 root root  9 Jun 30 14:45 
ip-172.12.1.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 
-> ../../sds
lrwxrwxrwx 1 root root  9 Jun 30 14:45 
ip-172.12.2.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 
-> ../../sdl
lrwxrwxrwx 1 root root  9 Jun 30 14:45 
ip-172.12.1.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 
-> ../../sdo
lrwxrwxrwx 1 root root  9 Jun 30 14:45 
ip-172.12.2.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 
-> ../../sdm
So the device has two different IQNs
(iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00 and
iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53).
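A possible adjustment, shown only as a sketch (it reuses the existing
_get_iscsi_devices and _get_multipath_device_name helpers and is not the
merged upstream patch), is to collect every IQN that maps to the multipath
device instead of returning on the first match:

    import os

    def _get_multipath_iqns(self, multipath_device):
        # Sketch: gather all IQNs whose by-path entry resolves to the given
        # multipath device, instead of stopping at the first one.
        iqns = set()
        for entry in self._get_iscsi_devices():
            entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % entry)
            entry_multipath = self._get_multipath_device_name(entry_real_path)
            if entry_multipath == multipath_device:
                iqns.add(entry.split("iscsi-")[1].split("-lun")[0])
        return list(iqns)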

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470093

Title:
  The function "_get_multipath_iqn" get iqn is not complete

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. A SAN storage backend can expose more than one IQN, so one multipath
device may have more than one IQN.
  2. The function is as follows:
      def _get_multipath_iqn(self, multipath_device):
          entries = self._get_iscsi_devices()
          for entry in entries:
              entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % entry)
              entry_multipath = self._get_multipath_device_name(entry_real_path)
              if entry_multipath == multipath_device:
                  return entry.split("iscsi-")[1].split("-lun")[0]
          return None
  So as soon as one entry matches the multipath_device, the function returns,
and it returns only one IQN. The problem is that the multipath device is
backed by several single-path devices, as shown below:

  [root@R4300G2-ctrl02 ~]# ll /dev/disk/by-path/
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 
ip-172.12.1.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 
-> ../../sds
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 
ip-172.12.2.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 
-> ../../sdl
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 
ip-172.12.1.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 
-> ../../sdo
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 
ip-172.12.2.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 
-> ../../sdm
  So the device has two different IQNs
(iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00 and
iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470087] [NEW] stop glance-api service will raise exception

2015-06-30 Thread YaoZheng_ZTE
Public bug reported:

1. On a Red Hat system, run "systemctl stop openstack-glance-api.service" to
stop the glance API service.
2. After step 1, the glance API log shows the following:
2015-06-18 08:58:47.538 11453 CRITICAL glance [-] OSError: [Errno 38] Function 
not implemented
2015-06-18 08:58:47.538 11453 TRACE glance Traceback (most recent call last):
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/bin/glance-api", line 
10, in 
2015-06-18 08:58:47.538 11453 TRACE glance sys.exit(main())
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/cmd/api.py", line 90, in main
2015-06-18 08:58:47.538 11453 TRACE glance server.wait()
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 406, in wait
2015-06-18 08:58:47.538 11453 TRACE glance self.wait_on_children()
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 345, in 
wait_on_children
2015-06-18 08:58:47.538 11453 TRACE glance pid, status = os.wait()
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/green/os.py", line 78, in wait
2015-06-18 08:58:47.538 11453 TRACE glance return waitpid(0, 0)
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/green/os.py", line 96, in waitpid
2015-06-18 08:58:47.538 11453 TRACE glance greenthread.sleep(0.01)
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, in sleep
2015-06-18 08:58:47.538 11453 TRACE glance hub.switch()
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2015-06-18 08:58:47.538 11453 TRACE glance return self.greenlet.switch()
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 346, in run
2015-06-18 08:58:47.538 11453 TRACE glance self.wait(sleep_time)
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 82, in wait
2015-06-18 08:58:47.538 11453 TRACE glance sleep(seconds)
2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 287, in 
kill_children
2015-06-18 08:58:47.538 11453 TRACE glance os.killpg(self.pgid, 
signal.SIGTERM)
2015-06-18 08:58:47.538 11453 TRACE glance OSError: [Errno 38] Function not 
implemented
2015-06-18 08:58:47.538 11453 TRACE glance 

3. I am using the Icehouse 2014.1.3 version, but I reviewed the code of the
Kilo version and this issue is also present there.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1470087

Title:
  stop glance-api service will raise exception

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  1. On a Red Hat system, run "systemctl stop openstack-glance-api.service"
to stop the glance API service.
  2. After step 1, the glance API log shows the following:
  2015-06-18 08:58:47.538 11453 CRITICAL glance [-] OSError: [Errno 38] 
Function not implemented
  2015-06-18 08:58:47.538 11453 TRACE glance Traceback (most recent call last):
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/bin/glance-api", line 
10, in 
  2015-06-18 08:58:47.538 11453 TRACE glance sys.exit(main())
  2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/cmd/api.py", line 90, in main
  2015-06-18 08:58:47.538 11453 TRACE glance server.wait()
  2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 406, in wait
  2015-06-18 08:58:47.538 11453 TRACE glance self.wait_on_children()
  2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 345, in 
wait_on_children
  2015-06-18 08:58:47.538 11453 TRACE glance pid, status = os.wait()
  2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/green/os.py", line 78, in wait
  2015-06-18 08:58:47.538 11453 TRACE glance return waitpid(0, 0)
  2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/green/os.py", line 96, in waitpid
  2015-06-18 08:58:47.538 11453 TRACE glance greenthread.sleep(0.01)
  2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, in sleep
  2015-06-18 08:58:47.538 11453 TRACE glance hub.switch()
  2015-06-18 08:58:47.538 11453 TRACE glance   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
  2015-06-18 08:58:47.538 11453 TRACE glance return self.greenlet.switch()
  2015-06-18 08:58:4

[Yahoo-eng-team] [Bug 1456007] [NEW] The function _delete_mpath has no valid parameter

2015-05-17 Thread YaoZheng_ZTE
Public bug reported:

In /nova/virt/libvirt/volume.py, the function
def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns):
never uses its multipath_device parameter, so the parameter is not needed and
should be removed.

This problem also exists in the K version.
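For illustration, the change amounts to dropping the argument from the
signature and from its call site (a sketch, not the actual patch):

    # current signature in nova/virt/libvirt/volume.py:
    #     def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns):
    # proposed signature, since multipath_device is never referenced:
    def _delete_mpath(self, iscsi_properties, ips_iqns):
        # body unchanged; it only uses iscsi_properties and ips_iqns
        pass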

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova-compute

** Tags added: nova-compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456007

Title:
  The function _delete_mpath has no valid parameter

Status in OpenStack Compute (Nova):
  New

Bug description:
  In /nova/virt/libvirt/volume.py, the function
  def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns):
  never uses its multipath_device parameter, so the parameter is not needed
  and should be removed.

  This problem also exists in the K version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427179] [NEW] boot from volume instance failed because the reschedule deleted the volume

2015-03-02 Thread YaoZheng_ZTE
Public bug reported:


1. Create a volume "nova volume-create --display-name test_volume 1"
[root@controller51 nova(keystone_admin)]# nova volume-list
+--+---+-+--+-+---+
| ID   | Status| Display Name| 
Size | Volume Type | Attached to
   |
+--+---+-+--+-+---+
| a740ca7b-6881-4e28-9fdb-eb0d80336757 | available | test_volume | 
1| None|
   |
| 1f1c19c7-a5f9-4683-a1f6-e339f02e1410 | in-use| NFVO_system_disk2   | 
30   | None| 6fa391f8-bd8b-483d-9286-3cebc9a93d55   
   |
| d868710e-30d4-4095-bd8f-fea9f16fe8ea | in-use| NFVO_data_software_disk | 
30   | None| 
a07abdd5-07a6-4b41-a285-9b825f7b5623;6fa391f8-bd8b-483d-9286-3cebc9a93d55 |
| b03a39ca-ebc1-4472-9a04-58014e67b37c | in-use| NFVO_system_disk1   | 
30   | None| a07abdd5-07a6-4b41-a285-9b825f7b5623   
   |
+--+---+-+--+-+---+
2. The following command boots a new instance and attaches a volume at the
same time:
[root@controller51 nova(keystone_admin)]#  nova boot --flavor 1 --image 
1736471c-3530-49f2-ad34-6ef7da285050 --block-device-mapping 
vdb=a740ca7b-6881-4e28-9fdb-eb0d80336757:blank:1:1 --nic 
net-id=31fce69e-16b9-4114-9fa9-589763e58fb0 test
+--+---+
| Property | Value  
   |
+--+---+
| OS-DCF:diskConfig| MANUAL 
   |
| OS-EXT-AZ:availability_zone  | nova   
   |
| OS-EXT-SRV-ATTR:host | -  
   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -  
   |
| OS-EXT-SRV-ATTR:instance_name| instance-0082  
   |
| OS-EXT-STS:power_state   | 0  
   |
| OS-EXT-STS:task_state| scheduling 
   |
| OS-EXT-STS:vm_state  | building   
   |
| OS-SRV-USG:launched_at   | -  
   |
| OS-SRV-USG:terminated_at | -  
   |
| accessIPv4   |
   |
| accessIPv6   |
   |
| adminPass| sWTuKqzrpS32   
   |
| config_drive |
   |
| created  | 2015-03-02T11:34:29Z   
   |
| flavor   | m1.tiny (1)
   |
| hostId   |
   |
| id   | 868cfd12-eb36-4140-b7b3-98cfcec627cd   
   |
| image| 
VMB_X86_64_LX_2.6.32_64_REL_2014_12_26.img 
(1736471c-3530-49f2-ad34-6ef7da285050) |
| key_name | -  
   |
| metadata | {} 
   |
| name | test

[Yahoo-eng-team] [Bug 1425382] [NEW] A volume was attached to the vm instance twice, the vm instance cannot normally use this volume

2015-02-24 Thread YaoZheng_ZTE
Public bug reported:

Steps to reproduce:
1. Create a VM instance named test_vm1.
2. Create a volume named test_volume1.
3. Run the nova command "nova volume-attach test_vm1 test_volume1", then
confirm that the volume is attached to the instance correctly:
[root@opencos_cjl ~(keystone_admin)]# nova list
+--+---+++-+---+
| ID   | Name   
   | Status | Task State | Power State | Networks   
   |
+--+---+++-+---+
| b917e46b-539f-4024-bced-73c6b7c00ea2 | 
TestZteOneVolumeAttatchTo2Servers-instance-1863856957 | ACTIVE | -  | 
Running | zfl_internal_net=192.168.0.107|
| d0e5f1a4-9da1-4c39-a17d-12e43d20cd10 | 
TestZteOneVolumeAttatchTo2Servers-instance-8729469| ACTIVE | -  | 
Running | zfl_internal_net=192.168.0.108|
| 9a6c6aff-d77c-4699-a41f-abb9d8e4b09e | test2  
   | ACTIVE | -  | Running | 
zfl_internal_net=192.168.0.101, 10.43.210.232 |
| 4a338d56-0daf-48d8-bcb5-d46de74b3887 | test_vm1   
   | ACTIVE | -  | Running | 
zfl_internal_net=192.168.0.109|
+--+---+++-+---+
[root@opencos_cjl ~(keystone_admin)]# 
[root@opencos_cjl ~(keystone_admin)]# nova volume-list
+--+---+--+--+-+--+
| ID   | Status| Display Name | Size | 
Volume Type | Attached to  |
+--+---+--+--+-+--+
| 22ad798d-77d2-4031-8b82-a5512e9f9284 | in-use| test_volume1 | 1| None 
   | 4a338d56-0daf-48d8-bcb5-d46de74b3887 |
| 76f708fe-0e47-4f1a-a43b-08001f6a65d9 | available | test | 1| None 
   |  |
+--+---+--+--+-+--+

4. Run the nova command again, "nova volume-attach test_vm1 test_volume1";
the following exception is raised:
   
[root@opencos_cjl ~(keystone_admin)]# nova volume-attach 
4a338d56-0daf-48d8-bcb5-d46de74b3887  22ad798d-77d2-4031-8b82-a5512e9f9284
ERROR: Invalid volume: Volume has been attached to the instance (HTTP 400) 
(Request-ID: req-24f8b244-9809-41e4-b8e8-8a5ca7157c1c)
The exception itself is correct, but the issue is that after step 4 test_vm1
can no longer use test_volume1 normally. If you log in to the test_vm1 OS,
you will find that the volume attached as /dev/vdb no longer works correctly.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425382

Title:
  A volume was attached to the vm instance twice, the vm instance cannot
  normally use this volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:
  1. Create a VM instance named test_vm1.
  2. Create a volume named test_volume1.
  3. Run the nova command "nova volume-attach test_vm1 test_volume1", then
confirm that the volume is attached to the instance correctly:
  [root@opencos_cjl ~(keystone_admin)]# nova list
  
+--+---+++-+---+
  | ID   | Name 
 | Status | Task State | Power State | Networks 
 |
  
+--+---+++-+---+
  | b917e46b-539f-4024-bced-73c6b7c00ea2 | 
TestZteOneVolumeAttatchTo2Servers-instance-1863856957 | ACTIVE | -  | 
Running | zfl_internal_net=192.168.0.107|
  | d0e5f1a4-9da1-4c39-a17d-12e43d20cd10 | 
TestZteOneVolumeAttatchTo2Servers-instance-8729469| ACTIVE | -  | 
Running | zfl_internal_net=192.168.0.108|
  | 9a6c6aff-d77c-4699-a41f-abb9d8e4b09e | test2
 | ACTIVE | -  | Running | 
zfl_internal

[Yahoo-eng-team] [Bug 1415001] [NEW] when live migration of instances fails, it cannot roll back correctly

2015-01-27 Thread YaoZheng_ZTE
Public bug reported:

1. Create a volume backed instance.
2. In the _live_migration function of /nova/virt/libvirt/driver.py, if an
exception is raised inside this function,
recover_method(context, instance, dest, block_migration) is called, but the
"migrate_data" parameter defaults to None. So when the
_rollback_live_migration(self, context, instance, dest, block_migration,
migrate_data=None) function in /nova/compute/manager.py is called,
"migrate_data" is None, the variable "is_volume_backed" stays False, and
rollback_live_migration_at_destination(self, context, instance) is never run.
As a result, the destination resources are not rolled back.

3. I am using the Icehouse version; this bug is also present in the Juno and
Kilo versions.
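A sketch of the idea only (Icehouse-era call shapes assumed; this is not the
committed fix): forward migrate_data on the error path so the rollback can
tell that the instance is volume backed and clean up the destination host.

    # nova/virt/libvirt/driver.py, error path of _live_migration:
    # pass migrate_data through instead of letting it default to None.
    recover_method(context, instance, dest, block_migration, migrate_data)

    # nova/compute/manager.py, inside _rollback_live_migration: with
    # migrate_data available, is_volume_backed can be computed and the
    # destination host cleaned up.
    is_volume_backed = bool(migrate_data and
                            migrate_data.get('is_volume_backed'))
    if block_migration or is_volume_backed:
        self.compute_rpcapi.rollback_live_migration_at_destination(
            context, instance, dest)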

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415001

Title:
  when live migration of instances fails, it cannot roll back correctly

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Create a volume backed instance.
  2. In the _live_migration function of /nova/virt/libvirt/driver.py, if an
exception is raised inside this function,
recover_method(context, instance, dest, block_migration) is called, but the
"migrate_data" parameter defaults to None. So when the
_rollback_live_migration(self, context, instance, dest, block_migration,
migrate_data=None) function in /nova/compute/manager.py is called,
"migrate_data" is None, the variable "is_volume_backed" stays False, and
rollback_live_migration_at_destination(self, context, instance) is never run.
As a result, the destination resources are not rolled back.

  3. I am using the Icehouse version; this bug is also present in the Juno
and Kilo versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1415001/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406161] [NEW] boot from fcsan volume live migration caused bdm table lost connection_info value

2014-12-28 Thread YaoZheng_ZTE
Public bug reported:

Steps to reproduce this issue:
1. "nova boot" vm1 from an FC SAN volume; this instance's bdm table
connection_info is as follows:
{"driver_volume_type": "fibre_channel", "serial": 
"63013f85-34fd-4040-a234-6ee57e1a8eeb", "data": {"device_path": "/dev/dm-11", 
"target_discovered": false, "devices": [{"device": "/dev/sdv", "host": "7", 
"id": "0", "channel": "0", "lun": "8"}, {"device": "/dev/sdx", "host": "8", 
"id": "0", "channel": "0", "lun": "8"}], "qos_specs": null, "volume_id": 
"63013f85-34fd-4040-a234-6ee57e1a8eeb", "target_lun": 8, "access_mode": "rw", 
"target_wwn": ["20014c09b4b04d0e", "20024c09b4b04d0e", "20034c09b4b04d0e", 
"20044c09b4b04d0e", "20014c09b4b04d18", "20024c09b4b04d18", "20034c09b4b04d18", 
"20044c09b4b04d18"], "multipath_id": "3500422790776c4d6"}}
2. Run "nova live-migration vm1". After it finishes, vm1 has been migrated to
another host, but this instance's bdm table connection_info is now:
{"driver_volume_type": "fibre_channel", "serial": 
"63013f85-34fd-4040-a234-6ee57e1a8eeb", "data": { "target_discovered": false,  
"qos_specs": null, "volume_id": "63013f85-34fd-4040-a234-6ee57e1a8eeb", 
"target_lun": 9, "access_mode": "rw", "target_wwn": ["20014c09b4b04d0e", 
"20024c09b4b04d0e", "20034c09b4b04d0e", "20044c09b4b04d0e", "20014c09b4b04d18", 
"20024c09b4b04d18", "20034c09b4b04d18", "20044c09b4b04d18"], "multipath_id": 
"3500422790776c4d6"}}

Notice that the keys "device_path" and "devices" were lost.
3. If we live-migrate this vm1 again, it will fail. The call chain is as
follows:
2014-12-16 05:19:12.039 9678 INFO nova.compute.manager [-] [instance: 
5901a64a-32a6-448d-a535-5b03bb4df533] _post_live_migration() is started..
2014-12-16 05:19:12.068 9678 INFO urllib3.connectionpool [-] Starting new HTTP 
connection (1): 10.43.177.244
2014-12-16 05:19:12.139 9678 INFO nova.compute.manager [-] [instance: 
5901a64a-32a6-448d-a535-5b03bb4df533] During sync_power_state the instance has 
a pending task. Skip.
2014-12-16 05:19:12.512 9678 ERROR nova.openstack.common.loopingcall [-] in 
fixed duration looping call
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall Traceback 
(most recent call last):
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py", line 
78, in _inner
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4987, in 
wait_for_live_migration
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall 
migrate_data)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall 
payload)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall 
six.reraise(self.type_, self.value, self.tb)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall return 
f(self, context, *args, **kw)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 369, in 
decorated_function
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall e, 
sys.exc_info())
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall 
six.reraise(self.type_, self.value, self.tb)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 356, in 
decorated_function
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall return 
function(self, context, *args, **kwargs)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4824, in 
_post_live_migration
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall 
migrate_data)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5188, in 
post_live_migration
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall 
disk_dev)
2014-12-16 05:19:12.512 9678 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1213

[Yahoo-eng-team] [Bug 1402535] [NEW] terminate instances boot from volume used multipath have residual device

2014-12-15 Thread YaoZheng_ZTE
Public bug reported:

Steps to reproduce:
1. In nova.conf, configure iscsi_used_multipath_tool=multipath and restart
the nova-compute service.
2. Launch instance vm1 booted from a volume (on HP SAN), then attach volume1
to vm1.
3. Launch instance vm2 booted from a volume (on HP SAN), then attach volume2
to vm2 on the same host.
4. Terminate vm2.
5. vm2 has been destroyed, but its devices under /dev/disk/by-path/ are not
completely removed.
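For reference, a minimal sketch (not nova's actual cleanup code; the device
name is hypothetical) of how a single leftover SCSI path can be removed by
writing to its sysfs delete attribute:

    # Sketch: drop one residual SCSI device (e.g. the target of a stale
    # /dev/disk/by-path symlink) via sysfs. Requires root privileges.
    def remove_scsi_device(device_name):
        path = '/sys/block/%s/device/delete' % device_name
        with open(path, 'w') as f:
            f.write('1')

    remove_scsi_device('sdx')  # 'sdx' is a placeholder device name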

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1402535

Title:
  terminate instances boot from volume used multipath have residual
  device

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:
  1. In nova.conf, configure iscsi_used_multipath_tool=multipath and restart
the nova-compute service.
  2. Launch instance vm1 booted from a volume (on HP SAN), then attach
volume1 to vm1.
  3. Launch instance vm2 booted from a volume (on HP SAN), then attach
volume2 to vm2 on the same host.
  4. Terminate vm2.
  5. vm2 has been destroyed, but its devices under /dev/disk/by-path/ are not
completely removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1402535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402504] [NEW] Live migration of volume backed instances broken because bug 1288039 changed

2014-12-14 Thread YaoZheng_ZTE
Public bug reported:

1. In the pre_live_migration function of /nova/compute/manager.py, the
block_device_mapping table has already been updated with the destination
host's target LUN id.
2. In the post_live_migration_at_destination function of
/nova/compute/manager.py, when cleaning up the source host's target LUN, the
target LUN id is first queried from the block_device_mapping table; but after
step 1 it has already been overwritten with the destination host's target LUN
id, so an error occurs whenever the source and destination target LUN ids
differ.
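As a sketch of one possible direction only (not the upstream fix; the
terminate_connection_on_source helper is hypothetical): remember the source
host's connection_info before pre_live_migration rewrites the BDM rows, and
use the saved copy for the source-side cleanup.

    # Sketch: capture connection_info per volume before migration starts.
    from nova import objects

    bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
        context, instance.uuid)
    source_connection_info = {bdm.volume_id: bdm.connection_info
                              for bdm in bdms if bdm.is_volume}

    # ... pre_live_migration updates the BDM with the destination LUN ids ...

    # Source-side cleanup reads the saved values instead of re-querying the
    # BDM table, which now describes the destination host.
    for volume_id, connection_info in source_connection_info.items():
        terminate_connection_on_source(volume_id, connection_info)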

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: live-migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1402504

Title:
  Live migration of volume backed instances broken because bug 1288039
  changed

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. In the pre_live_migration function of /nova/compute/manager.py, the
block_device_mapping table has already been updated with the destination
host's target LUN id.
  2. In the post_live_migration_at_destination function of
/nova/compute/manager.py, when cleaning up the source host's target LUN, the
target LUN id is first queried from the block_device_mapping table; but after
step 1 it has already been overwritten with the destination host's target LUN
id, so an error occurs whenever the source and destination target LUN ids
differ.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1402504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402477] [NEW] Live migration of volume backed instances broken because the table of block_device_mapping was updated incorrect

2014-12-14 Thread YaoZheng_ZTE
Public bug reported:

1. Live-migrate a volume backed instance.
2. In the pre_live_migration function, the block_device_mapping table has
been updated with the destination host's volume LUN information.
3. In the _post_live_migration function of /nova/compute/manager.py, the call
block_device_info = self._get_instance_block_device_info(ctxt, instance, bdms)
is made with incorrect parameters, so the block_device_mapping table is
changed back to the source host's volume LUN information.
4. In the next step, when the post_live_migration_at_destination function of
/nova/compute/manager.py runs, it queries the volume LUN connection from the
block_device_mapping table, but that table now holds the source host's volume
LUN information, so the following call on the destination host fails:
    self.volume_driver_method('connect_volume',
                              connection_info,
                              disk_dev)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1402477

Title:
   Live migration of volume backed instances broken because  the table
  of block_device_mapping was updated incorrect

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Live-migrate a volume backed instance.
  2. In the pre_live_migration function, the block_device_mapping table has
been updated with the destination host's volume LUN information.
  3. In the _post_live_migration function of /nova/compute/manager.py, the
call
block_device_info = self._get_instance_block_device_info(ctxt, instance, bdms)
is made with incorrect parameters, so the block_device_mapping table is
changed back to the source host's volume LUN information.
  4. In the next step, when the post_live_migration_at_destination function
of /nova/compute/manager.py runs, it queries the volume LUN connection from
the block_device_mapping table, but that table now holds the source host's
volume LUN information, so the following call on the destination host fails:
      self.volume_driver_method('connect_volume',
                                connection_info,
                                disk_dev)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1402477/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp