[Yahoo-eng-team] [Bug 1798806] Re: Race condition between RT and scheduler

2018-10-22 Thread Radoslav Gerganov
*** This bug is a duplicate of bug 1729621 ***
https://bugs.launchpad.net/bugs/1729621

I just found that this problem is fixed on the master branch as part of
bug #1729621.  However, the fix has not been backported to the stable releases.

** This bug has been marked a duplicate of bug 1729621
   Inconsistent value for vcpu_used

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1798806

Title:
  Race condition between RT and scheduler

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The HostState object used by the scheduler derives its own values from the
  'stats' property of the compute node, e.g.:

  self.stats = compute.stats or {}
  self.num_instances = int(self.stats.get('num_instances', 0))
  self.num_io_ops = int(self.stats.get('io_workload', 0))
  self.failed_builds = int(self.stats.get('failed_builds', 0))

  These values are used for both filtering and weighing compute hosts.
  However, the 'stats' property of the compute node is cleared and populated
  again during the periodic update_available_resources(). The clearing occurs
  in RT._copy_resources(), which preserves only the old value of
  'failed_builds'. This creates a race condition between the RT and the
  scheduler, which may result in wrong values for 'num_io_ops' and
  'num_instances' being populated into the HostState object, leading to
  incorrect scheduling decisions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1798806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1798806] [NEW] Race condition between RT and scheduler

2018-10-19 Thread Radoslav Gerganov
Public bug reported:

The HostState object used by the scheduler derives its own values from the
'stats' property of the compute node, e.g.:

self.stats = compute.stats or {}
self.num_instances = int(self.stats.get('num_instances', 0))
self.num_io_ops = int(self.stats.get('io_workload', 0))
self.failed_builds = int(self.stats.get('failed_builds', 0))

These values are used for both filtering and weighing compute hosts.
However, the 'stats' property of the compute node is cleared and populated
again during the periodic update_available_resources(). The clearing occurs in
RT._copy_resources(), which preserves only the old value of 'failed_builds'.
This creates a race condition between the RT and the scheduler, which may
result in wrong values for 'num_io_ops' and 'num_instances' being populated
into the HostState object, leading to incorrect scheduling decisions.
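Below is a hedged sketch of the interaction described above. The method and
field names follow the description; the bodies are illustrative and not the
actual Nova code.

class ResourceTrackerSketch(object):
    """Illustrates how the periodic update wipes compute_node.stats."""

    def __init__(self):
        self.stats = {'num_instances': 3, 'io_workload': 1, 'failed_builds': 2}

    def _copy_resources(self):
        # The periodic update_available_resources() rebuilds stats from
        # scratch; only the old failed_builds counter survives the reset.
        failed_builds = self.stats.get('failed_builds', 0)
        self.stats.clear()
        self.stats['failed_builds'] = failed_builds
        # num_instances and io_workload are repopulated later in the update,
        # so a scheduler that reads compute_node.stats in this window builds a
        # HostState with num_instances == 0 and num_io_ops == 0.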

** Affects: nova
 Importance: High
 Assignee: Radoslav Gerganov (rgerganov)
 Status: In Progress


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1798806

Title:
  Race condition between RT and scheduler

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The HostState object used by the scheduler derives its own values from the
  'stats' property of the compute node, e.g.:

  self.stats = compute.stats or {}
  self.num_instances = int(self.stats.get('num_instances', 0))
  self.num_io_ops = int(self.stats.get('io_workload', 0))
  self.failed_builds = int(self.stats.get('failed_builds', 0))

  These values are used for both filtering and weighing compute hosts.
  However, the 'stats' property of the compute node is cleared and populated
  again during the periodic update_available_resources(). The clearing occurs
  in RT._copy_resources(), which preserves only the old value of
  'failed_builds'. This creates a race condition between the RT and the
  scheduler, which may result in wrong values for 'num_io_ops' and
  'num_instances' being populated into the HostState object, leading to
  incorrect scheduling decisions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1798806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1790126] [NEW] Scheduler is dumping all instances on a compute host

2018-08-31 Thread Radoslav Gerganov
Public bug reported:

There is a log message in the scheduler which dumps all instances
running on a compute host:

LOG.debug("Update host state with instances: %s", inst_dict)

There are at least 2 problems with this:
  1. it generates a huge amount of log output which is not really useful
  2. it crashes when there is an instance with a non-ASCII name

** Affects: nova
 Importance: Low
 Assignee: Radoslav Gerganov (rgerganov)
 Status: In Progress

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Assignee: (unassigned) => Radoslav Gerganov (rgerganov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1790126

Title:
  Scheduler is dumping all instances on a compute host

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  There is a log message in the scheduler which dumps all instances
  running on a compute host:

  LOG.debug("Update host state with instances: %s", inst_dict)

  There are at least 2 problems with this:
    1. it generates a huge amount of log output which is not really useful
    2. it crashes when there is an instance with a non-ASCII name

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1790126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784666] [NEW] The thread local which stores the request context is not green

2018-07-31 Thread Radoslav Gerganov
Public bug reported:

nova-compute imports oslo.context before calling monkey_patch():

(Pdb) bt
  /usr/local/bin/nova-compute(6)()
-> from nova.cmd.compute import main
  /opt/stack/nova/nova/__init__.py(33)()
-> import oslo_service  # noqa
  /usr/local/lib/python2.7/dist-packages/oslo_service/__init__.py(17)()
-> from oslo_log import log as logging
  /usr/local/lib/python2.7/dist-packages/oslo_log/log.py(48)()
-> from oslo_log import formatters
  /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py(28)()
-> from oslo_context import context as context_utils
> /usr/local/lib/python2.7/dist-packages/oslo_context/context.py(40)()
-> _request_store = threading.local()

which makes the global thread-local variable (_request_store) not green-thread
aware. So instead of having a request context per green thread, we have one
context shared by all green threads, which is overwritten every time a new
context is created.

This is a regression from this patch:

https://review.openstack.org/#/c/434327/

which imports oslo.service before eventlet.
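A minimal reproduction sketch of the effect described above (assuming only
that eventlet is installed): a threading.local created before monkey_patch()
is shared by all green threads, while one created after patching is per green
thread.

import threading

_early_store = threading.local()   # created before monkey_patch(), like _request_store

import eventlet
eventlet.monkey_patch()

_late_store = threading.local()    # created after monkey_patch(), one copy per green thread

def worker(name):
    _early_store.ctx = name
    _late_store.ctx = name
    eventlet.sleep(0)              # yield so the other green thread can run and overwrite
    print("%s early=%s late=%s" % (name, _early_store.ctx, _late_store.ctx))

pool = eventlet.GreenPool()
pool.spawn(worker, "req-A")
pool.spawn(worker, "req-B")
pool.waitall()
# Both workers end up seeing the last writer's value in _early_store, while
# each keeps its own value in _late_store.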

** Affects: nova
 Importance: Medium
 Status: Triaged

** Affects: nova/ocata
 Importance: Undecided
 Status: New

** Affects: nova/pike
 Importance: Undecided
 Status: New

** Affects: nova/queens
 Importance: Undecided
 Status: New

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784666

Title:
  The thread local which stores the request context is not green

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  nova-compute imports oslo.context before calling monkey_patch():

  (Pdb) bt
/usr/local/bin/nova-compute(6)()
  -> from nova.cmd.compute import main
/opt/stack/nova/nova/__init__.py(33)()
  -> import oslo_service  # noqa

/usr/local/lib/python2.7/dist-packages/oslo_service/__init__.py(17)()
  -> from oslo_log import log as logging
/usr/local/lib/python2.7/dist-packages/oslo_log/log.py(48)()
  -> from oslo_log import formatters
/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py(28)()
  -> from oslo_context import context as context_utils
  > /usr/local/lib/python2.7/dist-packages/oslo_context/context.py(40)()
  -> _request_store = threading.local()

  which makes the global thread-local variable (_request_store) not
  green-thread aware. So instead of having a request context per green thread,
  we have one context shared by all green threads, which is overwritten every
  time a new context is created.

  This is a regression from this patch:

  https://review.openstack.org/#/c/434327/

  which imports oslo.service before eventlet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782309] [NEW] Confusing log message from scheduler

2018-07-18 Thread Radoslav Gerganov
Public bug reported:

If there are not enough resources for spawning an instance, we get this
message in the scheduler log:

"""
Got no allocation candidates from the Placement API. This may be a temporary 
occurrence as compute nodes start up and begin reporting inventory to the 
Placement service.
"""

Not having enough resources to create the instance is a far more probable
reason for getting no allocation candidates than a compute node just starting
up. I think the log message should state both reasons, with insufficient
resources mentioned first.
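A hedged sketch of the reworded message this suggests; the exact wording below
is a proposal, not the merged Nova change.

from oslo_log import log as logging

LOG = logging.getLogger(__name__)

LOG.info("Got no allocation candidates from the Placement API. This could be "
         "due to insufficient resources or a temporary occurrence as compute "
         "nodes start up.")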

** Affects: nova
 Importance: Low
     Assignee: Radoslav Gerganov (rgerganov)
 Status: New

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Assignee: (unassigned) => Radoslav Gerganov (rgerganov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782309

Title:
  Confusing log message from scheduler

Status in OpenStack Compute (nova):
  New

Bug description:
  If there are not enough resources for spawning an instance, we get
  this message in the scheduler log:

  """
  Got no allocation candidates from the Placement API. This may be a temporary 
occurrence as compute nodes start up and begin reporting inventory to the 
Placement service.
  """

  Not having enough resources to create the instance is a far more probable
  reason for getting no allocation candidates than a compute node just
  starting up. I think the log message should state both reasons, with
  insufficient resources mentioned first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416000] Re: VMware: write error lost while transferring volume

2017-10-13 Thread Radoslav Gerganov
This has been fixed in Nova as part of the image transfer refactoring a long
time ago:
https://github.com/openstack/nova/commit/2df83abaa0a5c828421fc38602cc1e5145b46ff4

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416000

Title:
  VMware: write error lost while transferring volume

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.vmware:
  Confirmed

Bug description:
  I'm running the following command:

  cinder create --image-id a24f216f-9746-418e-97f9-aebd7fa0e25f 1

  The write side of the data transfer (a VMwareHTTPWriteFile object)
  returns an error in write() which I haven't debugged yet. However,
  this error is never reported to the user, although it does show up in
  the logs. The effect is that the transfer sits in the 'downloading'
  state until the 7200-second timeout expires, at which point it reports
  the timeout.

  The reason is that the code which waits on transfer completion (in
  start_transfer) does:

  try:
  # Wait on the read and write events to signal their end
  read_event.wait()
  write_event.wait()
  except (timeout.Timeout, Exception) as exc:
  ...

  That is, it waits for the read thread to signal completion via
  read_event before checking write_event. However, because write_thread
  has died, read_thread is blocking and will never signal completion.
  You can demonstrate this by swapping the order. If you wait for write
  first, it will fail immediately, which is what you want. However, that's
  not right either because now you're missing read errors.

  Ideally this code needs to be able to notice an error at either end
  and stop immediately.
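  A hedged sketch (not the actual Nova/oslo.vmware code) of a waiter that
  notices a failure on either end. Only the read_event/write_event names come
  from the description above; everything else, including the assumption that
  they are eventlet events, is illustrative.

  import eventlet

  def wait_for_transfer(read_event, write_event, timeout_secs=7200):
      with eventlet.Timeout(timeout_secs):
          while not (read_event.ready() and write_event.ready()):
              # If either side has already finished with an error, surface it
              # now instead of blocking on the other (possibly stuck) side.
              for event in (read_event, write_event):
                  if event.ready():
                      # wait() re-raises an exception stored via send_exception()
                      event.wait()
              eventlet.sleep(0.5)
          return read_event.wait(), write_event.wait()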

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516936] Re: [VMware]: migrate/resize failed in same host with vmware driver

2017-10-13 Thread Radoslav Gerganov
I am not able to reproduce this problem running the latest Nova and Neutron
with the DVS plugin. I think this was a problem when using nova-network,
which is no longer supported, so I will resolve the bug as "Invalid".
Feel free to reopen it if you manage to reproduce it.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1516936

Title:
  [VMware]: migrate/resize failed in same host with vmware driver

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Similar to bug 1161226, this issue exists for the VMware driver. When
  resizing a VMware instance with the option "allow_resize_to_same_host =
  True" in nova.conf, the resize operation fails because a duplicated MAC
  address is reported. Cold migration also fails. This only occurs when
  resizing/migrating on the same host, not between two hosts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1516936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536543] Re: [vmware] with nova-network, vmware driver can not connect vm to portgroup

2017-10-12 Thread Radoslav Gerganov
nova-network is no longer supported; you should use Neutron with either
DVS or NSX.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536543

Title:
  [vmware] with nova-network, vmware driver can not connect vm to
  portgroup

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  version:
  kilo

  steps:
  with the nova-network VLAN manager, when booting a VM, the VMware driver
  finds the correct vSwitch according to vlan_interface and creates the
  portgroup on the switch of ESXi A. But when creating the VM, cluster DRS
  will allocate the VM to ESXi B, which does not have the correct portgroup.

  Result:
  The VM cannot communicate with other VMs.

  suggestion:
  The VMware driver should sync the networking configuration between ESXi
  hosts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1536543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694543] Re: VMware CI is failing with error "VimFaultException: A specified parameter was not correct: Faults: ['InvalidArgument']"

2017-10-12 Thread Radoslav Gerganov
This has been fixed now

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1694543

Title:
  VMware CI is failing with error "VimFaultException: A specified
  parameter was not correct: Faults: ['InvalidArgument']"

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Seen here:

  http://207.189.188.190/logs/47/468147/1/check-vote/ext-nova-
  zuul/cbab6d1/n-cpu.log.gz?level=TRACE#_2017-05-26_22_56_46_579

  Which resulted in nova-compute auto-disabling the service.

  2017-05-26 22:55:27.826 19424 ERROR oslo_vmware.common.loopingcall [-] in 
fixed duration looping call
  2017-05-26 22:55:27.826 19424 ERROR oslo_vmware.common.loopingcall Traceback 
(most recent call last):
  2017-05-26 22:55:27.826 19424 ERROR oslo_vmware.common.loopingcall   File 
"/opt/stack/oslo.vmware/oslo_vmware/common/loopingcall.py", line 75, in _inner
  2017-05-26 22:55:27.826 19424 ERROR oslo_vmware.common.loopingcall 
self.f(*self.args, **self.kw)
  2017-05-26 22:55:27.826 19424 ERROR oslo_vmware.common.loopingcall   File 
"/opt/stack/oslo.vmware/oslo_vmware/api.py", line 452, in _poll_task
  2017-05-26 22:55:27.826 19424 ERROR oslo_vmware.common.loopingcall raise 
task_ex
  2017-05-26 22:55:27.826 19424 ERROR oslo_vmware.common.loopingcall 
FileNotFoundException: File [mgmt-ds] 
192.168.1.6_base/18b6f8c1-a6f3-4d32-9ff9-76403680a8ee was not found
  2017-05-26 22:55:27.826 19424 ERROR oslo_vmware.common.loopingcall 
  2017-05-26 22:55:28.386 19424 ERROR oslo_vmware.common.loopingcall [-] in 
fixed duration looping call
  2017-05-26 22:55:28.386 19424 ERROR oslo_vmware.common.loopingcall Traceback 
(most recent call last):
  2017-05-26 22:55:28.386 19424 ERROR oslo_vmware.common.loopingcall   File 
"/opt/stack/oslo.vmware/oslo_vmware/common/loopingcall.py", line 75, in _inner
  2017-05-26 22:55:28.386 19424 ERROR oslo_vmware.common.loopingcall 
self.f(*self.args, **self.kw)
  2017-05-26 22:55:28.386 19424 ERROR oslo_vmware.common.loopingcall   File 
"/opt/stack/oslo.vmware/oslo_vmware/api.py", line 452, in _poll_task
  2017-05-26 22:55:28.386 19424 ERROR oslo_vmware.common.loopingcall raise 
task_ex
  2017-05-26 22:55:28.386 19424 ERROR oslo_vmware.common.loopingcall 
FileNotFoundException: File [mgmt-ds] 
192.168.1.6_base/18b6f8c1-a6f3-4d32-9ff9-76403680a8ee was not found
  2017-05-26 22:55:28.386 19424 ERROR oslo_vmware.common.loopingcall 
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall [-] in 
fixed duration looping call
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall Traceback 
(most recent call last):
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall   File 
"/opt/stack/oslo.vmware/oslo_vmware/common/loopingcall.py", line 75, in _inner
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall 
self.f(*self.args, **self.kw)
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall   File 
"/opt/stack/oslo.vmware/oslo_vmware/api.py", line 452, in _poll_task
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall raise 
task_ex
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall 
VimFaultException: A specified parameter was not correct: 
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall Faults: 
['InvalidArgument']
  2017-05-26 22:55:28.945 19424 ERROR oslo_vmware.common.loopingcall 
  2017-05-26 22:55:28.946 19424 ERROR nova.compute.manager 
[req-93e2d8e1-a1ea-42de-8103-69196f07fd53 
tempest-AggregatesAdminTestJSON-2070234197 
tempest-AggregatesAdminTestJSON-2070234197] [instance: 
90960b16-aa62-46ba-82d3-842d7439d978] Instance failed to spawn
  2017-05-26 22:55:28.946 19424 ERROR nova.compute.manager [instance: 
90960b16-aa62-46ba-82d3-842d7439d978] Traceback (most recent call last):
  2017-05-26 22:55:28.946 19424 ERROR nova.compute.manager [instance: 
90960b16-aa62-46ba-82d3-842d7439d978]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2157, in _build_resources
  2017-05-26 22:55:28.946 19424 ERROR nova.compute.manager [instance: 
90960b16-aa62-46ba-82d3-842d7439d978] yield resources
  2017-05-26 22:55:28.946 19424 ERROR nova.compute.manager [instance: 
90960b16-aa62-46ba-82d3-842d7439d978]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1963, in _build_and_run_instance
  2017-05-26 22:55:28.946 19424 ERROR nova.compute.manager [instance: 
90960b16-aa62-46ba-82d3-842d7439d978] block_device_info=block_device_info)
  2017-05-26 22:55:28.946 19424 ERROR nova.compute.manager [instance: 
90960b16-aa62-46ba-82d3-842d7439d978]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 324, in spawn
  2017-05-26 22:55:28.946 19424 ERROR nova.compute.manager [instance: 
90960b16-aa62-46ba-82d3-842d7439d978] admin_password, network_info, 
block_device_info)
  

[Yahoo-eng-team] [Bug 1709287] [NEW] Volume detach fails if there are multiple BDM entries

2017-08-08 Thread Radoslav Gerganov
Public bug reported:

Steps to reproduce:
1. Attaching a volume to an instance fails because of an RPC timeout when
nova-api calls nova-compute to create the BDM
2. Attaching the same volume to the same instance succeeds the second time
3. There are two BDMs for this volume and one of them has an empty
connection_info.  When we try to detach the volume, an error is thrown because
of the stale BDM entry created in step 1:

[req-b14eb2a2-10bc-4b1a-b62f-ead07947eb66 7c0126911c154f3db23e4f013c70f5aa 
b006cefe78734655ad29cf49445f2f67 - - -] Exception during message handling: 
 can't be decoded
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 138, in _dispatch_and_reply
incoming.message))
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 185, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 127, in _do_dispatch
result = func(ctxt, **new_args)
  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in 
wrapper
return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 110, in 
wrapped
payload)
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
self.force_reraise()
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 89, in wrapped
return f(self, context, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 395, in 
decorated_function
kwargs['instance'], e, sys.exc_info())
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
self.force_reraise()
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 383, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 466, in 
decorated_function
instance=instance)
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
self.force_reraise()
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 456, in 
decorated_function
*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4976, 
in detach_volume
attachment_id=attachment_id)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4906, 
in _detach_volume
connection_info = jsonutils.loads(bdm.connection_info)
  File "/usr/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 
229, in loads
return json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
  File "/usr/lib/python2.7/dist-packages/oslo_utils/encodeutils.py", line 39, 
in safe_decode
raise TypeError("%s can't be decoded" % type(text))
TypeError:  can't be decoded


It is not easy to catch the timeout and then delete the BDM entry, because the
entry may get created after the timeout (we have seen this in our
environment). Also, we may accidentally delete an entry created by a
concurrent attach request.
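A hedged sketch (not the actual Nova fix) of guarding against the stale entry
on the detach path, so an empty connection_info is never passed to
jsonutils.loads():

from oslo_serialization import jsonutils

def _load_connection_info(bdm):
    if not bdm.connection_info:
        # Stale BDM left over from the failed attach in step 1; there is
        # nothing to disconnect for it, so skip it instead of crashing.
        return None
    return jsonutils.loads(bdm.connection_info)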

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709287

Title:
  Volume detach fails if there are multiple BDM entries

Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:
  1. Attaching a volume to an instance fails because of an RPC timeout when
  nova-api calls nova-compute to create the BDM
  2. Attaching the same volume to the same instance succeeds the second time
  3. There are two BDMs for this volume and one of them has an empty
  connection_info.  When we try to detach the volume, an error is thrown
  because of the stale BDM entry created in step 1:

  [req-b14eb2a2-10bc-4b1a-b62f-ead07947eb66 7c0126911c154f3db23e4f013c70f5aa 
b006cefe78734655ad29cf49445f2f67 - - -] Exception during message handling: 
 can't be decoded
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 138, in _dispatch_and_reply
  incoming.message))
File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 185, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 127, in _do_dispatch
  result = 

[Yahoo-eng-team] [Bug 1704952] [NEW] VMware: Concurrent nova-compute service initialization may fail

2017-07-18 Thread Radoslav Gerganov
Public bug reported:

During initialization, the VMware Nova compute driver checks whether a
VC extension with key 'org.openstack.compute' exists and, if not, registers
one. This is a race condition: if multiple services try to register the same
extension, only one of them will succeed. The fix is to catch the
InvalidArgument fault from the vSphere API and ignore the exception.
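A minimal sketch of the approach described above. The helper name
_register_extension is hypothetical, and the handling assumes oslo.vmware
surfaces the fault as a VimFaultException carrying the fault name:

from oslo_vmware import exceptions as vexc

def register_openstack_extension(session):
    try:
        _register_extension(session, key='org.openstack.compute')  # hypothetical helper
    except vexc.VimFaultException as e:
        if 'InvalidArgument' not in e.fault_list:
            raise
        # Another nova-compute service won the race and registered the
        # extension first; the error can be safely ignored.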

** Affects: nova
 Importance: Low
 Assignee: Radoslav Gerganov (rgerganov)
 Status: In Progress


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704952

Title:
  VMware: Concurrent nova-compute service initialization may fail

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  During initialization, the VMware Nova compute driver checks whether a
  VC extension with key 'org.openstack.compute' exists and, if not, registers
  one. This is a race condition: if multiple services try to register the same
  extension, only one of them will succeed. The fix is to catch the
  InvalidArgument fault from the vSphere API and ignore the exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674946] [NEW] cloud-init fails with "Unknown network_data link type: dvs"

2017-03-22 Thread Radoslav Gerganov
Public bug reported:

When booting an OpenStack instance, cloud-init fails with:

[   33.307325] cloud-init[445]: Cloud-init v. 0.7.9 running 'init-local' at 
Mon, 20 Mar 2017 14:42:58 +. Up 31.06 seconds.
[   33.368434] cloud-init[445]: 2017-03-20 14:43:00,779 - util.py[WARNING]: 
failed stage init-local
[   33.449886] cloud-init[445]: failed run of stage init-local
[   33.490863] cloud-init[445]: 

[   33.542214] cloud-init[445]: Traceback (most recent call last):
[   33.585204] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 513, in 
status_wrapper
[   33.654579] cloud-init[445]: ret = functor(name, args)
[   33.696372] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 269, in main_init
[   33.755593] cloud-init[445]: 
init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
[   33.809124] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 622, in 
apply_network_config
[   33.847161] cloud-init[445]: netcfg, src = self._find_networking_config()
[   33.876562] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 609, in 
_find_networking_config
[   33.916335] cloud-init[445]: if self.datasource and 
hasattr(self.datasource, 'network_config'):
[   33.956207] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceConfigDrive.py", 
line 147, in network_config
[   34.008213] cloud-init[445]: self.network_json, 
known_macs=self.known_macs)
[   34.049714] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py", line 
627, in convert_net_json
[   34.104226] cloud-init[445]: 'Unknown network_data link type: %s' % 
link['type'])
[   34.144219] cloud-init[445]: ValueError: Unknown network_data link type: dvs
[   34.175934] cloud-init[445]: 


I am using Neutron with the Simple DVS plugin.
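A hedged sketch of the kind of change this implies, assuming the helper in
cloudinit/sources/helpers/openstack.py keeps a whitelist of physical link
types (the name KNOWN_PHYSICAL_TYPES and the other entries below are
illustrative): 'dvs' links reported by the Neutron DVS plugin would then be
handled like any other physical NIC instead of raising ValueError.

KNOWN_PHYSICAL_TYPES = (
    'ethernet',
    'phy',
    'vif',
    'dvs',   # link type emitted for ports on a VMware Distributed Virtual Switch
)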

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1674946

Title:
  cloud-init fails with "Unknown network_data link type: dvs"

Status in cloud-init:
  New

Bug description:
  When booting an OpenStack instance, cloud-init fails with:

  [   33.307325] cloud-init[445]: Cloud-init v. 0.7.9 running 'init-local' at 
Mon, 20 Mar 2017 14:42:58 +. Up 31.06 seconds.
  [   33.368434] cloud-init[445]: 2017-03-20 14:43:00,779 - util.py[WARNING]: 
failed stage init-local
  [   33.449886] cloud-init[445]: failed run of stage init-local
  [   33.490863] cloud-init[445]: 

  [   33.542214] cloud-init[445]: Traceback (most recent call last):
  [   33.585204] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 513, in 
status_wrapper
  [   33.654579] cloud-init[445]: ret = functor(name, args)
  [   33.696372] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 269, in main_init
  [   33.755593] cloud-init[445]: 
init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
  [   33.809124] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 622, in 
apply_network_config
  [   33.847161] cloud-init[445]: netcfg, src = 
self._find_networking_config()
  [   33.876562] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 609, in 
_find_networking_config
  [   33.916335] cloud-init[445]: if self.datasource and 
hasattr(self.datasource, 'network_config'):
  [   33.956207] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceConfigDrive.py", 
line 147, in network_config
  [   34.008213] cloud-init[445]: self.network_json, 
known_macs=self.known_macs)
  [   34.049714] cloud-init[445]:   File 
"/usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py", line 
627, in convert_net_json
  [   34.104226] cloud-init[445]: 'Unknown network_data link type: %s' % 
link['type'])
  [   34.144219] cloud-init[445]: ValueError: Unknown network_data link type: 
dvs
  [   34.175934] cloud-init[445]: 


  I am using Neutron with the Simple DVS plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1674946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1627693] [NEW] VMware: do not check if inventory folder already exists

2016-09-26 Thread Radoslav Gerganov
Public bug reported:

When creating a project folder, we iterate over the existing folders to check
if it has already been created. This may take a long time if there are many
project folders. Given that project folders are never deleted, we should
avoid iterating over the existing folders. Instead, we should always create
the project folder and handle DuplicateName exceptions.
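A minimal sketch of the approach described above; the session helper and the
lookup function are assumptions, not the actual Nova code:

from oslo_vmware import exceptions as vexc

def ensure_project_folder(session, parent_folder_ref, project_id):
    try:
        # Always attempt the creation instead of scanning existing folders first.
        return session._call_method(session.vim, 'CreateFolder',
                                    parent_folder_ref, name=project_id)
    except vexc.DuplicateName:
        # The folder already exists (created earlier or by a concurrent
        # request), so look it up and reuse it.
        return _find_child_folder(session, parent_folder_ref, project_id)  # hypothetical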

** Affects: nova
 Importance: Low
 Assignee: Radoslav Gerganov (rgerganov)
 Status: In Progress


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1627693

Title:
  VMware: do not check if inventory folder already exists

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When creating a project folder, we iterate over the existing folders to check
  if it has already been created. This may take a long time if there are many
  project folders. Given that project folders are never deleted, we should
  avoid iterating over the existing folders. Instead, we should always create
  the project folder and handle DuplicateName exceptions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1627693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602293] [NEW] The API doc for os-console-auth-tokens is wrong

2016-07-12 Thread Radoslav Gerganov
Public bug reported:

There are multiple errors in the API doc for os-console-auth-tokens
(name, description, params)

** Affects: nova
 Importance: Low
 Assignee: Radoslav Gerganov (rgerganov)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Radoslav Gerganov (rgerganov)

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1602293

Title:
  The API doc for os-console-auth-tokens is wrong

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  There are multiple errors in the API doc for os-console-auth-tokens
  (name, description, params)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1602293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559026] [NEW] nova-novncproxy doesn't start when 'record' is specified

2016-03-20 Thread Radoslav Gerganov
Public bug reported:

$ /usr/local/bin/nova-novncproxy --record --config-file
/etc/nova/nova.conf --web /opt/stack/noVNC

2016-03-18 12:34:01.940 CRITICAL nova [-] AttributeError: 'bool' object
has no attribute 'startswith'

2016-03-18 12:34:01.940 TRACE nova Traceback (most recent call last):
2016-03-18 12:34:01.940 TRACE nova   File "/usr/local/bin/nova-novncproxy", 
line 10, in 
2016-03-18 12:34:01.940 TRACE nova sys.exit(main())
2016-03-18 12:34:01.940 TRACE nova   File 
"/opt/stack/nova/nova/cmd/novncproxy.py", line 39, in main
2016-03-18 12:34:01.940 TRACE nova port=CONF.vnc.novncproxy_port)
2016-03-18 12:34:01.940 TRACE nova   File 
"/opt/stack/nova/nova/cmd/baseproxy.py", line 73, in proxy
2016-03-18 12:34:01.940 TRACE nova 
RequestHandlerClass=websocketproxy.NovaProxyRequestHandler
2016-03-18 12:34:01.940 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/websockify/websocketproxy.py", line 
265, in __init__
2016-03-18 12:34:01.940 TRACE nova websocket.WebSocketServer.__init__(self, 
RequestHandlerClass, *args, **kwargs)
2016-03-18 12:34:01.940 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/websockify/websocket.py", line 647, in 
__init__
2016-03-18 12:34:01.940 TRACE nova self.record = os.path.abspath(record)
2016-03-18 12:34:01.940 TRACE nova   File "/usr/lib/python2.7/posixpath.py", 
line 367, in abspath
2016-03-18 12:34:01.940 TRACE nova if not isabs(path):
2016-03-18 12:34:01.940 TRACE nova   File "/usr/lib/python2.7/posixpath.py", 
line 61, in isabs
2016-03-18 12:34:01.940 TRACE nova return s.startswith('/')
2016-03-18 12:34:01.940 TRACE nova AttributeError: 'bool' object has no 
attribute 'startswith'
2016-03-18 12:34:01.940 TRACE nova 


The 'record' argument should be a string, not a boolean.
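A hedged sketch of the option change this implies: 'record' has to carry a
file path, so it should be declared as a string option rather than a flag
(the help text below is illustrative, not the actual Nova config).

from oslo_config import cfg

record_opt = cfg.StrOpt(
    'record',
    help='Path to a file where websocket frames sent and received by the '
         'proxy service are stored; recording is disabled when unset.')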

** Affects: nova
 Importance: Low
 Assignee: Radoslav Gerganov (rgerganov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1559026

Title:
  nova-novncproxy doesn't start when 'record' is specified

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  $ /usr/local/bin/nova-novncproxy --record --config-file
  /etc/nova/nova.conf --web /opt/stack/noVNC

  2016-03-18 12:34:01.940 CRITICAL nova [-] AttributeError: 'bool'
  object has no attribute 'startswith'

  2016-03-18 12:34:01.940 TRACE nova Traceback (most recent call last):
  2016-03-18 12:34:01.940 TRACE nova   File "/usr/local/bin/nova-novncproxy", 
line 10, in 
  2016-03-18 12:34:01.940 TRACE nova sys.exit(main())
  2016-03-18 12:34:01.940 TRACE nova   File 
"/opt/stack/nova/nova/cmd/novncproxy.py", line 39, in main
  2016-03-18 12:34:01.940 TRACE nova port=CONF.vnc.novncproxy_port)
  2016-03-18 12:34:01.940 TRACE nova   File 
"/opt/stack/nova/nova/cmd/baseproxy.py", line 73, in proxy
  2016-03-18 12:34:01.940 TRACE nova 
RequestHandlerClass=websocketproxy.NovaProxyRequestHandler
  2016-03-18 12:34:01.940 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/websockify/websocketproxy.py", line 
265, in __init__
  2016-03-18 12:34:01.940 TRACE nova 
websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, **kwargs)
  2016-03-18 12:34:01.940 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/websockify/websocket.py", line 647, in 
__init__
  2016-03-18 12:34:01.940 TRACE nova self.record = os.path.abspath(record)
  2016-03-18 12:34:01.940 TRACE nova   File "/usr/lib/python2.7/posixpath.py", 
line 367, in abspath
  2016-03-18 12:34:01.940 TRACE nova if not isabs(path):
  2016-03-18 12:34:01.940 TRACE nova   File "/usr/lib/python2.7/posixpath.py", 
line 61, in isabs
  2016-03-18 12:34:01.940 TRACE nova return s.startswith('/')
  2016-03-18 12:34:01.940 TRACE nova AttributeError: 'bool' object has no 
attribute 'startswith'
  2016-03-18 12:34:01.940 TRACE nova 

  
  The 'record' argument should be a string, not a boolean.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1559026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546454] [NEW] VMware: NFC lease has to be updated when transferring streamOpt images

2016-02-17 Thread Radoslav Gerganov
Public bug reported:

Booting large streamOptimized images (>2GB) fails because the NFC lease
is not updated. This causes the lease to time out and kill the image
transfer. The fix is to call the update_progress() method every 60 seconds.
This is also an opportunity to refactor the image transfer code and make it
simpler.
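A minimal sketch of keeping the NFC lease alive while the transfer runs; the
session helper and the use of HttpNfcLeaseProgress are assumptions about the
vSphere API usage, not the actual Nova refactoring:

from oslo_service import loopingcall

LEASE_UPDATE_PERIOD = 60  # seconds, per the description above

def _renew_lease(session, lease):
    # Reporting any progress value below 100 renews the lease; 50 is arbitrary here.
    session._call_method(session.vim, 'HttpNfcLeaseProgress', lease, percent=50)

def transfer_with_lease(session, lease, do_transfer):
    timer = loopingcall.FixedIntervalLoopingCall(_renew_lease, session, lease)
    timer.start(interval=LEASE_UPDATE_PERIOD)
    try:
        do_transfer()  # the (possibly >2GB) streamOptimized image transfer
    finally:
        timer.stop()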

** Affects: nova
 Importance: High
 Assignee: Radoslav Gerganov (rgerganov)
 Status: In Progress


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546454

Title:
  VMware: NFC lease has to be updated when transferring streamOpt images

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Booting large streamOptimized images (>2GB) fails because the NFC
  lease is not updated. This causes the lease to time out and kill the
  image transfer. The fix is to call the update_progress() method every
  60 seconds. This is also an opportunity to refactor the image transfer
  code and make it simpler.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1546454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286405] Re: Unable to boot VM using image exported from Virtual Center

2015-10-06 Thread Radoslav Gerganov
Support for OVA images has been added in the Kilo release:

https://blueprints.launchpad.net/nova/+spec/vmware-driver-ova-support

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1286405

Title:
  Unable to boot VM using image exported from Virtual Center

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Steps to reproduce:
  1. From Virtual Center, export a working VM as an OVA file (test.ova)
  2. Upload OVA file to glance using:
  glance image-create --name=testova --disk-format=vmdk --container-format=ovf 
--is-public=True --property vmware_adaptertype="ide" --property 
vmware_disktype="preallocated" < testimages/test.ova

  
  Actual result:
  The boot appears to succeed and the VM appears to be running, but the console
  shows "Operating system not found".
  Also:
  - I have tried with --property vmware_disktype="sparse" and the result is the 
same
  - Trying with --container-format="ova" gives:
  400 Bad Request
  Invalid container format 'ova' for image.
  (HTTP 400)

  Expected: 
  Able to boot into OS and see the login screen

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1286405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340564] Re: Very bad performance of concurrent spawning VMs to VCenter

2015-10-06 Thread Radoslav Gerganov
Since we have introduced SPBM policies, this is no longer an issue.
SPBM provides great flexibility for choosing the datastore where the
instance will be placed. See:

http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented
/vmware-spbm-support.html

** No longer affects: oslo.vmware

** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Importance: High => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340564

Title:
  Very bad performance of concurrent spawning VMs to VCenter

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  When 10 users start to provision VMs to a vCenter, OpenStack chooses the same
  datastore for everyone. After the first clone task is complete, OpenStack
  recognizes that the datastore space usage has increased and will choose
  another datastore. However, the next 9 provision tasks are still performed on
  the same datastore. As long as no provision task on a datastore completes,
  OpenStack will keep choosing that datastore to spawn the next VMs.

  This bug has a significant performance impact, because it greatly slows down
  all the provisioning tasks. The vCenter driver should choose a datastore that
  is not busy for the provisioning tasks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383100] Re: Vmware: attach an iscsi volume to instance failed

2015-10-06 Thread Radoslav Gerganov
*** This bug is a duplicate of bug 1386511 ***
https://bugs.launchpad.net/bugs/1386511

This is a duplicate of https://bugs.launchpad.net/nova/+bug/1386511 which
is already fixed.

** Changed in: nova
   Importance: High => Undecided

** Changed in: nova
   Status: Confirmed => Invalid

** This bug has been marked a duplicate of bug 1386511
   VMWare: attach a iscsi volume to a VirtualIDEController

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383100

Title:
  Vmware: attach an iscsi volume to instance failed

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When I tried to attach an iSCSI volume created by the Cinder LVM iSCSI
  driver to an instance, I discovered the following two problems:

  1. In the current code base, when attaching an iSCSI volume, the driver
  chooses the adapter type the same way as for the attachment of a VMDK volume:
 def _attach_volume_iscsi(self, connection_info, instance, mountpoint):
  ..
  (vmdk_file_path, adapter_type,
   disk_type) = vm_util.get_vmdk_path_and_adapter_type(hardware_devices)

  self.attach_disk_to_vm(vm_ref, instance,
 adapter_type, 'rdmp',
 device_name=device_name)

  Indeed, the adapter type should always be "lsiLogicsas". Otherwise we can
  easily end up with an odd scenario where an iSCSI volume is attached to an
  IDE adapter.

  
  2. The current code always chooses to rescan the first host's HBA in a
  cluster.

  e.g. you have two hosts in a vCenter cluster: host01 and host02. If you want
  to attach an iSCSI volume to an instance spawned on host02, the attach code
  should rescan host02's HBA and discover the target. But in fact the code
  always rescans host01's HBA:

  def _iscsi_rescan_hba(self, target_portal):
  """Rescan the iSCSI HBA to discover iSCSI targets."""
  host_mor = vm_util.get_host_ref(self._session, self._cluster)

  The "host_mor"  always represent the first host.  The following error
  may be produced:

  2014-10-20 10:50:07.917 21540 ERROR oslo.messaging.rpc.dispatcher 
[req-bdf00be9-194f-474d-a61b-5c998c36bdea ] Exception during message handling: 
The virtual disk is either corrupted or not a supported format.
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, 
in _do_dispatch
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 418, in 
decorated_function
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, 
in __exit__
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 302, in 
decorated_function
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher pass
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, 
in __exit__
  2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-10-20 10:50:07.917 21540 

[Yahoo-eng-team] [Bug 1470052] Re: instance failed to boot which image disk adapter type is scsi

2015-10-06 Thread Radoslav Gerganov
When you use qemu-img, the resulting VMDK has the sparse format. You should
use vmware_disktype="sparse", not vmware_disktype="preallocated".

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470052

Title:
  instance failed to boot which image disk adapter type is scsi

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I followed the OpenStack Configuration Reference [1], downloaded the cloud
  image [2] and converted it to VMDK:

  $ qemu-img convert -f qcow2 trusty-server-cloudimg-amd64-disk1.img -O
  vmdk ubuntu.vmdk

  and then use glance image-create command to upload image:

  $ glance image-create --name "ubuntu-thick-scsi" --is-public True 
--disk-format vmdk --container-format bare --property 
vmware_adaptertype="lsiLogic"  \  
 --property vmware_disktype="preallocated" --property 
vmware_ostype="ubuntu64Guest" < ubuntu.vmdk  
  +---+--+  
  | Property  | Value|  
  +---+--+  
  | Property 'vmware_adaptertype' | lsiLogic |  
  | Property 'vmware_disktype'| preallocated |  
  | Property 'vmware_ostype'  | ubuntu64Guest|  
  | checksum  | 676e7fc58d2314db6a264c11804b2d4c |  
  | container_format  | bare |  
  | created_at| 2015-06-26T23:55:36  |  
  | deleted   | False|  
  | deleted_at| None |  
  | disk_format   | vmdk |  
  | id| e79d4815-932b-4be6-b90c-0515f826c615 |  
  | is_public | True |  
  | min_disk  | 0|  
  | min_ram   | 0|  
  | name  | ubuntu-thick-scsi|  
  | owner | 93a022fd03d94b649d0127498e6149cf |  
  | protected | False|  
  | size  | 852230144|  
  | status| active   |  
  | updated_at| 2015-06-26T23:56:39  |  
  | virtual_size  | None |  
  +---+--+ 

  I created an instance in the dashboard successfully, but it failed to boot
  into the guest system. I suspect the instance does not have a controller that
  supports the SCSI disk; when using IDE, the instance runs well.

  
  [1]http://docs.openstack.org/kilo/config-reference/content/vmware.html
  
[2]http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485529] [NEW] The API for getting console connection info works only for RDP

2015-08-17 Thread Radoslav Gerganov
Public bug reported:

There is an API (os-console-auth-tokens) which returns the connection
info corresponding to a given console token.  However, this API works
only for RDP consoles:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/console_auth_tokens.py#L49

We need the same API for MKS consoles as well.  Also I don't see any
reason why we should check the console type at all.

** Affects: nova
 Importance: Medium
 Assignee: Radoslav Gerganov (rgerganov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485529

Title:
  The API for getting console connection info works only for RDP

Status in OpenStack Compute (nova):
  New

Bug description:
  There is an API (os-console-auth-tokens) which returns the connection
  info corresponding to a given console token.  However, this API works
  only for RDP consoles:

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/console_auth_tokens.py#L49

  We need the same API for MKS consoles as well.  Also I don't see any
  reason why we should check the console type at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276207] Re: vmware driver does not validate server certificates

2015-05-04 Thread Radoslav Gerganov
** Changed in: nova
   Status: Fix Released => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Radoslav Gerganov (rgerganov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276207

Title:
  vmware driver does not validate server certificates

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  Fix Released

Bug description:
  The VMware driver establishes connections to vCenter over HTTPS, yet
  the vCenter server certificate is not verified as part of the
  connection process.  I know this because my vCenter server is using a
  self-signed certificate which always fails certificate verification.
  As a result, someone could use a man-in-the-middle attack to spoof the
  vCenter host to Nova.

  The vmware driver has a dependency on Suds, which I believe also does
  not validate certificates because hartsock and I noticed it uses
  urllib.

  For reference, here is a link on secure connections in OpenStack:
  https://wiki.openstack.org/wiki/SecureClientConnections

  Assuming Suds is fixed to provide an option for certificate
  verification, the next step would be to modify the VMware driver to
  provide an option to override invalid certificates (such as self-
  signed).  In other parts of OpenStack, there are options to bypass the
  certificate check with an insecure option set, or you could put the
  server's certificate in the CA store.
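  A hedged sketch of the usual pattern referenced above: verify the server
  certificate against a CA bundle by default, with an explicit insecure
  opt-out (the option names are illustrative, not the actual Nova
  configuration).

  import ssl

  def build_vcenter_ssl_context(ca_file=None, insecure=False):
      if insecure:
          # Equivalent of the "insecure" override mentioned above: skip verification.
          return ssl._create_unverified_context()
      # Verify the certificate chain (and hostname) against the given CA store.
      context = ssl.create_default_context(cafile=ca_file)
      context.check_hostname = True
      return context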

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416000] Re: VMware: write error lost while transferring volume

2015-01-29 Thread Radoslav Gerganov
The same problem exists in Nova as we use the same approach for image
transfer:

https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/images.py#L181

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416000

Title:
  VMware: write error lost while transferring volume

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  I'm running the following command:

  cinder create --image-id a24f216f-9746-418e-97f9-aebd7fa0e25f 1

  The write side of the data transfer (a VMwareHTTPWriteFile object)
  returns an error in write() which I haven't debugged yet. However,
  this error is never reported to the user, although it does show up in
  the logs. The effect is that the transfer sits in the 'downloading'
  state until the 7200 second timeout, when it reports the timeout.

  The reason is that the code which waits on transfer completion (in
  start_transfer) does:

  try:
  # Wait on the read and write events to signal their end
  read_event.wait()
  write_event.wait()
  except (timeout.Timeout, Exception) as exc:
  ...

  That is, it waits for the read thread to signal completion via
  read_event before checking write_event. However, because write_thread
  has died, read_thread is blocking and will never signal completion.
  You can demonstrate this by swapping the order. If you wait for write
  first it will die immediately, which is what you want. However, that's
  not right either because now you're missing read errors.

  Ideally this code needs to be able to notice an error at either end
  and stop immediately.
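
  For illustration only, one way to notice a failure at either end is to
  wait for whichever side finishes (or raises) first instead of waiting
  on the reader unconditionally; reader and writer below are placeholder
  callables, not the actual nova/cinder transfer threads:

  import concurrent.futures

  def run_transfer(reader, writer, timeout=7200):
      # Sketch: run both sides and surface the first error from either
      # of them, instead of blocking on the read side unconditionally.
      pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
      try:
          futures = [pool.submit(reader), pool.submit(writer)]
          done, _ = concurrent.futures.wait(
              futures, timeout=timeout,
              return_when=concurrent.futures.FIRST_EXCEPTION)
          for fut in done:
              fut.result()   # re-raises the first error immediately
      finally:
          pool.shutdown(wait=False)

  In the real transfer code the surviving side would also need to be
  unblocked (for example by closing its file object), which this sketch
  does not attempt.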

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1416000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343359] Re: VMware: the retrieval of Datacenter is incorrect

2014-09-18 Thread Radoslav Gerganov
@Thang Correct, this looks fine now, thanks for looking in.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343359

Title:
  VMware: the retrieval of Datacenter is incorrect

Status in OpenStack Compute (Nova):
  Invalid
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  The implementation of get_datacenter_ref_and_name() in vmops.py is
  incorrect -- it simply returns the first Datacenter found instead of
  searching for the relevant one.

  We need to return the datacenter which contains the corresponding
  cluster in VMwareVMOps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358772] Re: get_available_datastores - possible issue with datastore accessibility

2014-09-16 Thread Radoslav Gerganov
We don't have host_mor after the removal of the ESX driver. I think this
bug is invalid.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358772

Title:
  get_available_datastores - possible issue with datastore accessibility

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Vipin found this issue during a code 'port' from nova to oslo.vmware
  in review 114551:
  https://review.openstack.org/#/c/114551/14/oslo/vmware/selector.py,unified

  
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/ds_util.py#L303

  Quote from vipin:

  I think there is a problem here.

  Assume that cluster_mor is None and host_mor is h1. If a datastore d1
  is attached to hosts h1 and h2 where it is accessible only to h2 and
  not h1, summary.accessible will be True even though it is not
  accessible to h1.

  We should use HostMountInfo.accessible in this case.
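
  A small sketch of the per-host check suggested above; DatastoreHostMount
  entries are modelled here as plain (host_ref, accessible) pairs, which
  is a simplification of the real vSphere type:

  def is_datastore_accessible_from_host(host_mounts, host_ref):
      # Consult the mount info of the specific host instead of the
      # datastore-wide summary.accessible flag.
      for mount_host, accessible in host_mounts:
          if mount_host == host_ref:
              return accessible
      return False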

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356449] [NEW] VMware: resize operations fail

2014-08-13 Thread Radoslav Gerganov
Public bug reported:

The driver function get_host_ip_addr() is needed by the resource_tracker
and if it is missing the resize operation fails. This is a regression
caused by the deprecation of the ESX driver with commit
1deb31f85a8f5d1e261b2cf1eddc537a5da7f60b

We need to bring back get_host_ip_addr() and return the IP address of
the vCenter server.

** Affects: nova
 Importance: Critical
 Assignee: Radoslav Gerganov (rgerganov)
 Status: New


** Tags: vmware

** Changed in: nova
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356449

Title:
  VMware: resize operations fail

Status in OpenStack Compute (Nova):
  New

Bug description:
  The driver function get_host_ip_addr() is needed by the
  resource_tracker and if it is missing the resize operation fails. This
  is a regression caused by the deprecation of the ESX driver with commit
  1deb31f85a8f5d1e261b2cf1eddc537a5da7f60b

  We need to bring back get_host_ip_addr() and return the IP address of
  the vCenter server.
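
  A minimal sketch of the proposed method; self._host_ip stands in for
  however the driver stores the configured vCenter address and is not a
  reference to the actual attribute name:

  class VMwareVCDriverSketch(object):
      def __init__(self, host_ip):
          self._host_ip = host_ip

      def get_host_ip_addr(self):
          # The resource tracker only needs an address to report, so
          # returning the configured vCenter IP is enough to unbreak resize.
          return self._host_ip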

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343359] [NEW] VMware: the retrieval of Datacenter is incorrect

2014-07-17 Thread Radoslav Gerganov
Public bug reported:

The implementation of get_datacenter_ref_and_name() in vmops.py is
incorrect -- it simply returns the first Datacenter found instead of
searching for the relevant one.

We need to return the datacenter which contains the corresponding
cluster in VMwareVMOps.

** Affects: nova
 Importance: High
 Assignee: Radoslav Gerganov (rgerganov)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343359

Title:
  VMware: the retrieval of Datacenter is incorrect

Status in OpenStack Compute (Nova):
  New

Bug description:
  The implementation of get_datacenter_ref_and_name() in vmops.py is
  incorrect -- it simply returns the first Datacenter found instead of
  searching for the relevant one.

  We need to return the datacenter which contains the corresponding
  cluster in VMwareVMOps.
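
  A sketch of the selection logic only; the datacenters argument is
  assumed to be a list of (dc_ref, dc_name, cluster_refs) tuples already
  fetched from vCenter, which is not how the driver code is actually
  structured:

  def get_datacenter_for_cluster(datacenters, cluster_ref):
      # Return the datacenter that actually contains the cluster instead
      # of blindly taking the first one.
      for dc_ref, dc_name, cluster_refs in datacenters:
          if cluster_ref in cluster_refs:
              return dc_ref, dc_name
      raise LookupError('cluster %s not found in any datacenter' % cluster_ref)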

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321797] [NEW] Tempest fails on backports to Icehouse

2014-05-21 Thread Radoslav Gerganov
Public bug reported:

There are two patches that backport fixes to stable/icehouse and they
both fail the same way on
tempest.api.identity.admin.v3.test_certificates.*

https://review.openstack.org/#/c/94406/
https://review.openstack.org/#/c/90809/

Tracebacks:

Captured traceback:
~~~
Traceback (most recent call last):
  File "tempest/api/identity/admin/v3/test_certificates.py", line 33, in test_get_ca_certificate
    resp, certificate = self.client.get_ca_certificate()
  File "tempest/services/identity/v3/json/identity_client.py", line 455, in get_ca_certificate
    resp, body = self.get("OS-SIMPLE-CERT/ca")
  File "tempest/common/rest_client.py", line 212, in get
    return self.request('GET', url, extra_headers, headers)
  File "tempest/common/rest_client.py", line 410, in request
    resp, resp_body)
  File "tempest/common/rest_client.py", line 454, in _error_checker
    raise exceptions.NotFound(resp_body)
NotFound: Object not found
Details: {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}

Captured pythonlogging:
~~~
2014-05-20 18:43:00,921 Request 
(CertificatesV3TestJSON:test_get_ca_certificate): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
2014-05-20 18:43:00,955 Request 
(CertificatesV3TestJSON:test_get_ca_certificate): 404 GET 
http://127.0.0.1:35357/v3/OS-SIMPLE-CERT/ca 0.032s

{0} tempest.api.identity.admin.v3.test_certificates.CertificatesV3TestJSON.test_get_certificates [0.008026s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File "tempest/api/identity/admin/v3/test_certificates.py", line 41, in test_get_certificates
    resp, certificates = self.client.get_certificates()
  File "tempest/services/identity/v3/json/identity_client.py", line 460, in get_certificates
    resp, body = self.get("OS-SIMPLE-CERT/certificates")
  File "tempest/common/rest_client.py", line 212, in get
    return self.request('GET', url, extra_headers, headers)
  File "tempest/common/rest_client.py", line 410, in request
    resp, resp_body)
  File "tempest/common/rest_client.py", line 454, in _error_checker
    raise exceptions.NotFound(resp_body)
NotFound: Object not found
Details: {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1321797

Title:
  Tempest fails on backports to Icehouse

Status in OpenStack Identity (Keystone):
  New

Bug description:
  There are two patches that backport fixes to stable/icehouse and they
  both fail the same way on
  tempest.api.identity.admin.v3.test_certificates.*

  https://review.openstack.org/#/c/94406/
  https://review.openstack.org/#/c/90809/

  Tracebacks:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "tempest/api/identity/admin/v3/test_certificates.py", line 33, in test_get_ca_certificate
      resp, certificate = self.client.get_ca_certificate()
    File "tempest/services/identity/v3/json/identity_client.py", line 455, in get_ca_certificate
      resp, body = self.get("OS-SIMPLE-CERT/ca")
    File "tempest/common/rest_client.py", line 212, in get
      return self.request('GET', url, extra_headers, headers)
    File "tempest/common/rest_client.py", line 410, in request
      resp, resp_body)
    File "tempest/common/rest_client.py", line 454, in _error_checker
      raise exceptions.NotFound(resp_body)
  NotFound: Object not found
  Details: {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}
  
  Captured pythonlogging:
  ~~~
  2014-05-20 18:43:00,921 Request 
(CertificatesV3TestJSON:test_get_ca_certificate): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
  2014-05-20 18:43:00,955 Request 
(CertificatesV3TestJSON:test_get_ca_certificate): 404 GET 
http://127.0.0.1:35357/v3/OS-SIMPLE-CERT/ca 0.032s
  
  {0} tempest.api.identity.admin.v3.test_certificates.CertificatesV3TestJSON.test_get_certificates [0.008026s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "tempest/api/identity/admin/v3/test_certificates.py", line 41, in test_get_certificates
      resp, certificates = self.client.get_certificates()
    File "tempest/services/identity/v3/json/identity_client.py", line 460, in get_certificates
      resp, body = self.get("OS-SIMPLE-CERT/certificates")
    File "tempest/common/rest_client.py", line 212, in get
      return self.request('GET', url, extra_headers, headers)
    File "tempest/common/rest_client.py", line 410, in request
      resp, resp_body)
    File 

[Yahoo-eng-team] [Bug 1317912] [NEW] VMware: get_info() fails if properties are missing

2014-05-09 Thread Radoslav Gerganov
 nova.openstack.common.vmware.api [-]
Logging out and terminating the current session with ID = 526ac461-2770
-40fa-a53e-7fe742d2499a.

2014-05-06 21:52:44.903 421 DEBUG nova.openstack.common.vmware.vim [-]
Invoking Logout on (sessionManager){

   value = SessionManager

** Affects: nova
 Importance: Low
 Assignee: Radoslav Gerganov (rgerganov)
 Status: New


** Tags: vmware

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317912

Title:
  VMware: get_info() fails if properties are missing

Status in OpenStack Compute (Nova):
  New

Bug description:
  The properties that we retrieve from VirtualMachineConfigSummary are
  optional and if they are missing get_info() throws an exception
  (tracebacks below). We should use default values when the properties
  are not available.
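
  A sketch of the defensive defaults suggested above; props is a plain
  dict standing in for the optional VirtualMachineConfigSummary
  properties, and the property names are assumptions for illustration:

  def build_info(props):
      def to_int(value, default=0):
          # Fall back to a default instead of passing None to int().
          return default if value is None else int(value)

      return {
          'max_mem_kb': to_int(props.get('memorySizeMB')) * 1024,
          'num_cpu': to_int(props.get('numCpu')),
      }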

  
  stack/common/vmware/vim.py:126

  2014-05-06 21:52:44.485 421 DEBUG nova.openstack.common.vmware.vim [-]
  No faults found in RetrievePropertiesEx API response.
  _retrieve_properties_ex_fault_checker /usr/lib/python2.7/dist-
  packages/nova/ope

  nstack/common/vmware/vim.py:153

  2014-05-06 21:52:44.486 421 DEBUG nova.openstack.common.vmware.vim [-]
  Invocation of RetrievePropertiesEx on (propertyCollector){

 value = propertyCollector

 _type = PropertyCollector

   } completed successfully. vim_request_handler /usr/lib/python2.7
  /dist-packages/nova/openstack/common/vmware/vim.py:187

  2014-05-06 21:52:44.487 421 DEBUG nova.openstack.common.vmware.api [-]
  Function _invoke_api returned successfully after 0 retries. _func
  /usr/lib/python2.7/dist-
  packages/nova/openstack/common/vmware/api.py:88

  2014-05-06 21:52:44.491 421 ERROR nova.openstack.common.threadgroup
  [-] int() argument must be a string or a number, not 'NoneType'

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  Traceback (most recent call last):

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-
  packages/nova/openstack/common/threadgroup.py, line 117, in wait

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  x.wait()

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-
  packages/nova/openstack/common/threadgroup.py, line 49, in wait

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  return self.thread.wait()

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line
  168, in wait

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  return self._exit_event.wait()

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-packages/eventlet/event.py, line 116,
  in wait

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  return hubs.get_hub().switch()

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line
  187, in switch

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  return self.greenlet.switch()

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line
  194, in main

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  result = function(*args, **kwargs)

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-
  packages/nova/openstack/common/service.py, line 65, in run_service

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  service.start()

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-packages/nova/service.py, line 154, in
  start

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  self.manager.init_host()

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line
  792, in init_host

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  self._init_instance(context, instance)

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line
  700, in _init_instance

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  drv_state = self._get_power_state(context, instance)

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line
  808, in _get_power_state

  2014-05-06 21:52:44.491 421 TRACE nova.openstack.common.threadgroup
  return self.driver.get_info(instance)["state"]

  2014-05-06 21:52:44.491

[Yahoo-eng-team] [Bug 1284658] [NEW] VMware: refactor how we iterate result objects from vCenter

2014-02-25 Thread Radoslav Gerganov
Public bug reported:

There is a lot of duplicate code which does the following (pseudo code):

result = session.get_objects_from_vcenter()
while result:
do_something(result)
token = get_token(result)
if token:
result = session.continue_to_get_objects(token)
else:
break

The part that retrieves more objects if a token is returned is repeated
over and over again. We need to come up with a common utility (e.g. an
iterator) which abstracts this boilerplate and then have something like:

for result in session.get_objects():
do_something_with_result(result)

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284658

Title:
  VMware: refactor how we iterate result objects from vCenter

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is a lot of duplicate code which does the following (pseudo code):

  result = session.get_objects_from_vcenter()
  while result:
  do_something(result)
  token = get_token(result)
  if token:
  result = session.continue_to_get_objects(token)
  else:
  break

  The part that retrieves more objects if a token is returned is repeated
  over and over again. We need to come up with a common utility (e.g. an
  iterator) which abstracts this boilerplate and then have something
  like:

  for result in session.get_objects():
  do_something_with_result(result)
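
  A sketch of such an iterator; get_objects and continue_retrieval below
  are stand-ins for the session calls in the pseudo code, not the real
  session API:

  def iter_results(get_objects, continue_retrieval):
      # Generator that hides the token-based paging so callers can simply
      # write: for result in iter_results(...): do_something(result)
      result = get_objects()
      while result:
          yield result
          token = getattr(result, 'token', None)
          if not token:
              break
          result = continue_retrieval(token)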

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275822] [NEW] VMware: redundant VC call after VM creation

2014-02-03 Thread Radoslav Gerganov
Public bug reported:

We have the following code in the spawn() method in vmops.py:

def spawn(...):
...
def _execute_create_vm():
vm_create_task = self._session._call_method(...)
self._session._wait_for_task(instance['uuid'], vm_create_task)

_execute_create_vm()
vm_ref = vm_util.get_vm_ref_from_name(self._session, instance_name)
...

get_vm_ref_from_name() is making a remote call which is redundant because
we can obtain a reference to the created VM from the CreateVM task
itself.  From the vSphere documentation:

This method returns a Task object with which to monitor the operation.
The info.result property in the Task contains the newly created
VirtualMachine upon success.

We should fix _execute_create_vm() to get the VM from the task and
return it.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275822

Title:
  VMware: redundant VC call after VM creation

Status in OpenStack Compute (Nova):
  New

Bug description:
  We have the following code in the spawn() method in vmops.py:

  def spawn(...):
  ...
  def _execute_create_vm():
  vm_create_task = self._session._call_method(...)
  self._session._wait_for_task(instance['uuid'], vm_create_task)

  _execute_create_vm()
  vm_ref = vm_util.get_vm_ref_from_name(self._session, instance_name)
  ...

  get_vm_ref_from_name() is making a remote call which is redundant
  because we can obtain a reference to the created VM from the CreateVM
  task itself.  From the vSphere documentation:

  This method returns a Task object with which to monitor the
  operation. The info.result property in the Task contains the newly
  created VirtualMachine upon success.

  We should fix _execute_create_vm() to get the VM from the task and
  return it.
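
  A sketch of the proposed change, assuming _wait_for_task returns the
  task info (the exact return value in the driver is an assumption here):

  def _execute_create_vm(session, instance, *call_args):
      vm_create_task = session._call_method(*call_args)
      task_info = session._wait_for_task(instance['uuid'], vm_create_task)
      # info.result of a successful CreateVM task is the new
      # VirtualMachine, so the extra lookup by name becomes unnecessary.
      return task_info.result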

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259981] [NEW] VMware: factor out the management of unit numbers

2013-12-11 Thread Radoslav Gerganov
Public bug reported:

Virtual devices need a unit number when they are attached to a
controller. We cannot have two devices on the same controller with the
same unit number.

Currently, the selection of unit numbers is spread all over the driver
code, leaking to high-level functions like spawn() and rescue(). We need
to factor this out into helper functions which take care of choosing a
proper unit number and creating additional controllers if needed.

High-level functions need to communicate only the intent like 'attach
CDROM' or 'attach disk' and shouldn't bother with details like unit
numbers.

** Affects: nova
 Importance: Undecided
 Assignee: Radoslav Gerganov (rgerganov)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Radoslav Gerganov (rgerganov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259981

Title:
  VMware: factor out the management of unit numbers

Status in OpenStack Compute (Nova):
  New

Bug description:
  Virtual devices need a unit number when they are attached to a
  controller. We cannot have two devices on the same controller with the
  same unit number.

  Currently, the selection of unit numbers is spread all over the driver
  code, leaking to high-level functions like spawn() and rescue(). We
  need to factor this out into helper functions which take care of
  choosing a proper unit number and creating additional controllers if
  needed.

  High-level functions need to communicate only the intent like 'attach
  CDROM' or 'attach disk' and shouldn't bother with details like unit
  numbers.
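
  One possible shape for such a helper, shown only as a sketch; the
  reserved unit applies to SCSI controllers and the device model here is
  just a set of already used numbers:

  def allocate_unit_number(used_units, reserved=(7,), max_units=16):
      # Pick the first free, non-reserved unit number on the controller.
      for unit in range(max_units):
          if unit not in used_units and unit not in reserved:
              return unit
      # No free slot -- the caller would create an additional controller.
      return None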

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1259981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257726] [NEW] VMware: refactor volumeops._get_volume_uuid()

2013-12-04 Thread Radoslav Gerganov
Public bug reported:

Recently I have been doing some queries for extraConfig VM options and
found that the most efficient way to retrieve a given property is to do:

session._call_method(vim_util, 'get_dynamic_property', vm_ref,
'VirtualMachine', 'config.extraConfig[some_prop_here]')

Right now we ask for all extraConfig options and then we iterate over
the result set to find a particular one.

** Affects: nova
 Importance: Undecided
 Assignee: Radoslav Gerganov (rgerganov)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Radoslav Gerganov (rgerganov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257726

Title:
  VMware: refactor volumeops._get_volume_uuid()

Status in OpenStack Compute (Nova):
  New

Bug description:
  Recently I have been doing some queries for extraConfig VM options and
  found that the most efficient way to retrieve a given property is to
  do:

  session._call_method(vim_util, 'get_dynamic_property', vm_ref,
  'VirtualMachine', 'config.extraConfig[some_prop_here]')

  Right now we ask for all extraConfig options and then we iterate over
  the result set to find a particular one.
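
  A sketch of the refactored lookup built on the call above; the
  'volume-<id>' extraConfig key, the returned object's shape and the
  surrounding function are assumptions for illustration:

  def get_volume_uuid(session, vim_util, vm_ref, volume_id):
      prop = 'config.extraConfig["volume-%s"]' % volume_id
      # Ask vCenter for just the one keyed option instead of fetching the
      # whole extraConfig list and iterating over it.
      opt = session._call_method(vim_util, 'get_dynamic_property',
                                 vm_ref, 'VirtualMachine', prop)
      return opt.value if opt else None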

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp