[Yahoo-eng-team] [Bug 1639091] [NEW] progress_watermark needs to be reset when the next iteration starts
Public bug reported:

In _live_migration_monitor in nova/virt/libvirt/driver.py, progress_watermark needs to be reset when it is smaller than info.data_remaining. progress_watermark holds the lowest data_remaining value seen so far, which at any moment may be a value left over from a previous copy iteration. In an environment with heavy RAM writes the migration can go through several copy iterations, so at the start of a new iteration data_remaining can be larger than progress_watermark (there is a log message that records exactly this case). Once that happens, progress_watermark stays below data_remaining for the rest of the migration, progress_time is never reset, and the migration is eventually aborted with a progress timeout even though it is not actually stalled.

** Affects: nova Importance: Undecided Assignee: linbing (hawkerous) Status: In Progress
** Changed in: nova Assignee: (unassigned) => linbing (hawkerous)
** Changed in: nova Status: New => In Progress

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1639091

Title: progress_watermark needs to be reset when the next iteration starts
Status in OpenStack Compute (nova): In Progress
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1639091/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
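The watermark logic the report describes can be sketched as follows. This is a minimal illustration, not the actual nova code: the function name and the (watermark, timestamp) tuple interface are invented here to isolate the one decision the bug is about, namely that the watermark must be reset when data_remaining jumps above it, which signals that a new copy iteration has started.

```python
def update_progress_state(progress_watermark, progress_time,
                          data_remaining, now):
    """Return updated (progress_watermark, progress_time).

    progress_watermark tracks the lowest data_remaining seen so far;
    progress_time is the last moment forward progress was observed.
    """
    if progress_watermark is None or progress_watermark > data_remaining:
        # Normal forward progress within the current iteration.
        return data_remaining, now
    if progress_watermark < data_remaining:
        # data_remaining grew: a new copy iteration started (heavy RAM
        # writes). Reset the watermark so later samples count as
        # progress again, instead of leaving progress_time frozen
        # until a spurious progress timeout fires.
        return data_remaining, now
    return progress_watermark, progress_time
```

Without the second branch, the watermark would stay pinned below data_remaining after an iteration restart and the timestamp would never advance, which is exactly the false-timeout scenario the report describes.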
[Yahoo-eng-team] [Bug 1625487] [NEW] live_migration progress watermark needs to be reset
Public bug reported:

In _live_migration_monitor in nova/virt/libvirt/driver.py, progress_watermark needs to be reset when it is smaller than info.data_remaining. progress_watermark holds the lowest data_remaining value seen so far, which at any moment may be a value left over from a previous copy iteration. In an environment with heavy RAM writes the migration can go through several copy iterations, so at the start of a new iteration data_remaining can be larger than progress_watermark (there is a log message that records exactly this case). Once that happens, progress_watermark stays below data_remaining for the rest of the migration, progress_time is never reset, and the migration is eventually aborted with a progress timeout even though it is not actually stalled.

** Affects: nova Importance: Undecided Assignee: linbing (hawkerous) Status: In Progress
** Changed in: nova Assignee: (unassigned) => linbing (hawkerous)
** Changed in: nova Status: New => In Progress

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1625487

Title: live_migration progress watermark needs to be reset
Status in OpenStack Compute (nova): In Progress
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1625487/+subscriptions
[Yahoo-eng-team] [Bug 1612466] [NEW] aggregate object file does not define LOG error in Liberty
Public bug reported:

The error output in nova-api.log is:

2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions [req-56dda11e-3041-4fac-8342-bb643643a1c7 e88120bc348c4f3ca37207ef4bcd3b90 43b2137632ac4ad8b2df8c0d27f13fb8 - - -] Unexpected exception in API method
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, in wrapped
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in wrapper
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/aggregates.py", line 169, in _remove_host
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in wrapped
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions     six.reraise(self.type_, self.value, self.tb)
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in wrapped
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3908, in remove_host_from_aggregate
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 213, in wrapper
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions     return fn(self, *args, **kwargs)
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/objects/aggregate.py", line 165, in delete_host
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/objects/aggregate.py", line 64, in update_aggregate_for_instances
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions NameError: global name 'LOG' is not defined
2016-08-09 14:50:19.148 4532 INFO nova.api.openstack.wsgi [req-56dda11e-3041-4fac-8342-bb643643a1c7 e88120bc348c4f3ca37207ef4bcd3b90 43b2137632ac4ad8b2df8c0d27f13fb8 - - -] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

I found that in nova/objects/aggregate.py the function update_aggregate_for_instance uses LOG.exception to write a message to nova-api.log when instance.save() raises an exception, but the module never defines LOG, so the request fails with this Unexpected API Error instead.

** Affects: nova Importance: Undecided Assignee: linbing (hawkerous) Status: Confirmed
** Changed in: nova Assignee: (unassigned) => linbing (hawkerous)
** Changed in: nova Status: New => Confirmed
https://bugs.launchpad.net/bugs/1612466

Title: aggregate object file does not define LOG error in Liberty
Status in OpenStack Compute (nova): Confirmed
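The fix implied by the traceback is a module-level logger in nova/objects/aggregate.py. A sketch follows; nova itself uses oslo_log (from oslo_log import log as logging), which follows the same getLogger(__name__) pattern as the stdlib used here, and the wrapper function is a hypothetical stand-in for the failing code path, not the real nova method.

```python
import logging

# The one missing line the NameError points at: LOG must exist at module
# scope before any function in the module calls LOG.exception().
LOG = logging.getLogger(__name__)


def save_with_logging(save):
    """Stand-in for the failing path: log the failure and re-raise the
    original exception, instead of dying with a NameError on LOG."""
    try:
        save()
    except Exception:
        LOG.exception("Failed to save instance during aggregate update")
        raise
```

With LOG defined, the caller sees the real exception (and a logged traceback) rather than the masking "Unexpected API Error" caused by the NameError.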
[Yahoo-eng-team] [Bug 1597160] [NEW] linuxbridge agent-id is not the linuxbridge agent UUID
Public bug reported:

I found that the linuxbridge agent calls the RPC method 'get_devices_details_list' on the API server and passes an agent_id parameter. That parameter is not actually used on the API server side; it only appears in debug output, so it seems to exist just to identify the calling agent. If that is its purpose, why build it as 'lb' + the physical interface MAC (e.g. lb00505636ff2d) instead of using the agent UUID? A UUID makes it quicker to identify the calling linuxbridge host than a MAC address does. Of course there are other ways to identify the calling host, such as the host parameter, so alternatively the unused parameter could simply be removed.

./server.log:48649:2016-06-27 16:36:56.403 108124 DEBUG neutron.plugins.ml2.rpc [req-d8a17733-6e4e-4012-8157-e7b9403c127d - - - - -] Device tapb57d75e8-f4 details requested by agent lb00505636ff2d with host com-net get_device_details /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:70

** Affects: neutron Importance: Undecided Status: New

https://bugs.launchpad.net/bugs/1597160

Title: linuxbridge agent-id is not the linuxbridge agent UUID
Status in neutron: New
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1597160/+subscriptions
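The two agent_id schemes the report contrasts can be sketched as below. These helper names are illustrative, not the actual neutron code; the 'lb' + MAC form matches the example value in the log line above, and the UUID form is what the reporter suggests instead.

```python
import uuid


def agent_id_from_mac(physical_interface_mac):
    # Current scheme: 'lb' plus the physical interface MAC with the
    # separators stripped, e.g. '00:50:56:36:ff:2d' -> 'lb00505636ff2d'.
    return 'lb' + physical_interface_mac.replace(':', '').lower()


def agent_id_from_uuid(agent_uuid):
    # Suggested scheme: the agent's UUID string, which maps directly to
    # the agent record instead of requiring a MAC-to-host lookup.
    return str(agent_uuid)
```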
[Yahoo-eng-team] [Bug 1528114] [NEW] vmware start instance from snapshot error
Public bug reported:

1. I take a snapshot of a VMware instance; the snapshot image (a linked clone of the snapshot) is saved on the glance server.

2. I boot from this snapshot image and get the following error:

2015-12-15 01:32:05.255 25992 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){ value = "task-1896" _type = "Task" }. _poll_task /usr/lib/python2.7/site-packages/oslo_vmware/api.py:397
2015-12-15 01:32:05.255 25992 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-12-15 01:32:05.285 25992 DEBUG oslo_vmware.exceptions [-] Fault InvalidArgument not matched. get_fault_class /usr/lib/python2.7/site-packages/oslo_vmware/exceptions.py:296
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall [-] in fixed duration looping call
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall Traceback (most recent call last):
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 76, in _inner
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall     self.f(*self.args, **self.kw)
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 428, in _poll_task
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall     raise task_ex
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall VimFaultException: The specified parameter is incorrect (指定的参数错误).
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall capacity
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall Faults: ['InvalidArgument']
2015-12-15 01:32:05.286 25992 ERROR nova.virt.vmwareapi.vmops [req-c466c53c-0a9c-45d7-aa78-c8812b4021a2 4b7fde8604c24e919e46b68fdf50b5a5 b0eab665ecd94e86885e03027ab90528 - - -] [instance: bea53465-ac4f-40f4-9937-f99024a8075d] Extending virtual disk failed with error: The specified parameter is incorrect (指定的参数错误). capacity

3. Tracking the error in nova/virt/vmwareapi/vmops.py, the call chain is:

  spawn()
    -> self._use_disk_image_as_linked_clone(vm_ref, vi)
      -> self._extend_if_required(...)
        -> self._extend_virtual_disk(...)

  def _extend_virtual_disk(...):
      vmdk_extend_task = self._session._call_method(
          self._session.vim,
          "ExtendVirtualDisk_Task",
          service_content.virtualDiskManager,
          name=name,
          datacenter=dc_ref,
          newCapacityKb=requested_size,
          eagerZero=False)

My WSDL is /opt/stack/vmware/wsdl/5.0/vimService.wsdl, the vCenter version is 5.1.0, and the OpenStack version is Liberty.

4. When I disable _extend_if_required in _use_disk_image_as_linked_clone, spawning succeeds.

** Affects: nova Importance: Undecided Status: New
** Tags: vmware
** Tags added: vmware

https://bugs.launchpad.net/bugs/1528114

Title: vmware start instance from snapshot error
Status in OpenStack Compute (nova): New
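Step 4 effectively applies by hand the guard sketched below: only issue ExtendVirtualDisk_Task when the requested size is actually larger than the current disk size, so a linked-clone snapshot whose capacity already matches is never "extended" with an invalid argument. This is an illustrative sketch, not the actual nova code; the function and parameter names are invented here.

```python
def extend_if_required(current_size_kb, requested_size_kb, extend_fn):
    """Call extend_fn (e.g. a wrapper around ExtendVirtualDisk_Task)
    only when the disk actually needs growing; return whether it ran."""
    if requested_size_kb > current_size_kb:
        extend_fn(requested_size_kb)
        return True
    # Disk already at (or above) the requested capacity: issuing the
    # extend task here is what triggers the InvalidArgument fault.
    return False
```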
[Yahoo-eng-team] [Bug 1525913] [NEW] vmwareapi get vcenter cluster
Public bug reported:

In Liberty, nova/virt/vmwareapi/vm_util.py contains:

  def get_cluster_ref_by_name(session, cluster_name):
      """Get reference to the vCenter cluster with the specified name."""
      all_clusters = get_all_cluster_mors(session)
      for cluster in all_clusters:
          if (hasattr(cluster, 'propSet') and
                  cluster.propSet[0].val == cluster_name):
              return cluster.obj

When all_clusters is None, this code raises TypeError: 'NoneType' object is not iterable, and nova-compute won't come up.

** Affects: nova Importance: Undecided Assignee: linbing (hawkerous) Status: New
** Changed in: nova Assignee: (unassigned) => linbing (hawkerous)

https://bugs.launchpad.net/bugs/1525913

Title: vmwareapi get vcenter cluster
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1525913/+subscriptions
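The guard the report asks for can be sketched as below. This is a simplified stand-in: the real function takes a session and looks at managed object references with a propSet attribute, while here clusters are modeled as plain dicts to keep the sketch self-contained; only the None/empty check is the point.

```python
def get_cluster_ref_by_name(all_clusters, cluster_name):
    """Return the matching cluster ref, or {} when the cluster list is
    None or empty, instead of raising TypeError on iteration."""
    if not all_clusters:
        # Covers both None (no clusters returned) and an empty list.
        return {}
    for cluster in all_clusters:
        if cluster.get('name') == cluster_name:
            return cluster['obj']
    return {}
```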
[Yahoo-eng-team] [Bug 1525797] [NEW] vmwareapi get clusters error
Public bug reported:

In nova/nova/virt/vmwareapi/vm_util.py, the function get_cluster_ref_by_name has a potential bug:

  all_clusters = get_all_cluster_mors(session)
  for cluster in all_clusters:
      ...

If all_clusters is None, this code raises TypeError: 'NoneType' object is not iterable. The function must check that all_clusters is not None before running the loop, and return {} otherwise.

** Affects: nova Importance: Undecided Assignee: linbing (hawkerous) Status: Fix Released
** Changed in: nova Assignee: (unassigned) => linbing (hawkerous)

https://bugs.launchpad.net/bugs/1525797

Title: vmwareapi get clusters error
Status in OpenStack Compute (nova): Fix Released

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1525797/+subscriptions
[Yahoo-eng-team] [Bug 1525797] Re: vmwareapi get clusters error
In nova/nova/virt/vmwareapi/vm_util.py, the fixed function get_cluster_ref_by_name reads:

  def get_cluster_ref_by_name(session, cluster_name):
      """Get reference to the vCenter cluster with the specified name."""
      all_clusters = get_all_cluster_mors(session)
      if not all_clusters:
          return {}
      for cluster in all_clusters:
          if (hasattr(cluster, 'propSet') and
                  cluster.propSet[0].val == cluster_name):
              return cluster.obj

** Changed in: nova Status: New => Fix Released

https://bugs.launchpad.net/bugs/1525797

Title: vmwareapi get clusters error
Status in OpenStack Compute (nova): Fix Released

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1525797/+subscriptions