Here is one possible explanation. If I had to bet, I would bet on this one
rather than on some chunk of code that might be synchronized.
When you use the destroy command, ACS first stops the VM. The stop process
is the one that can be slow: the OS of the VM might have taken a long time
to shut down.
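One way to confirm which phase is slow is to time the stop and the destroy async jobs separately. Below is a minimal sketch of the polling logic; the `make_fake_job` helper is a stand-in for a real call to CloudStack's `queryAsyncJobResult` (where `jobstatus` 0 = pending, 1 = succeeded, 2 = failed), so this runs on its own without a management server.

```python
import time

# Hypothetical stand-in for CloudStack's queryAsyncJobResult call; in a
# real setup this would hit the API. jobstatus: 0 = pending, 1 = succeeded,
# 2 = failed (CloudStack's usual convention).
def make_fake_job(polls_until_done):
    state = {"left": polls_until_done}
    def query(job_id):
        state["left"] -= 1
        return {"jobstatus": 1 if state["left"] <= 0 else 0}
    return query

def wait_for_job(query, job_id, poll_interval=0.01):
    """Poll an async job until it finishes; return (status, elapsed_seconds)."""
    start = time.monotonic()
    while True:
        result = query(job_id)
        if result["jobstatus"] != 0:
            return result["jobstatus"], time.monotonic() - start
        time.sleep(poll_interval)

# Timing the stop job and the destroy job separately with a loop like this
# would show which phase accounts for the hour-long delay.
status, elapsed = wait_for_job(make_fake_job(3), "job-1234")
print(status, round(elapsed, 2))
```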
The stop operation seems to be as quick as usual. Again, we don't see slow
destroys on all VMs. It occurred twice in a short time frame, but we haven't
experienced it since. I just want to understand the root cause, to see if
the management server performance was at fault or if it's a …
If you just use the stop option, is it taking a long time too?
On Wed, Apr 13, 2016 at 10:37 AM, Simon Godard wrote:
We are using XenServer 6.2.
Most VM destroys (expunge=true) are fairly quick. Is there anything else I
could be looking for? At the time of the slow destroy, there wasn't a very
high number of async jobs in flight. I suspect it could be related to a DB
concurrency issue; looking at this log I …
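To check the DB-side theory, one thing worth comparing is how long each async job record stayed open. Here is a sketch over sample rows shaped like CloudStack's `cloud.async_job` table; the column names (`job_cmd`, `created`, `last_updated`) are assumptions, so verify them against your schema before adapting this into a real query.

```python
from datetime import datetime, timedelta

# Sample rows shaped like CloudStack's cloud.async_job table (column names
# here are assumptions; check your schema). Durations are computed from
# created/last_updated to spot jobs that sat around for a long time.
rows = [
    {"job_cmd": "StopVMCmd",    "created": datetime(2016, 4, 13, 9, 0),
     "last_updated": datetime(2016, 4, 13, 9, 1)},
    {"job_cmd": "DestroyVMCmd", "created": datetime(2016, 4, 13, 9, 1),
     "last_updated": datetime(2016, 4, 13, 10, 2)},
]

def slow_jobs(rows, threshold=timedelta(minutes=10)):
    """Return (job_cmd, duration) for jobs that exceeded the threshold."""
    out = []
    for r in rows:
        duration = r["last_updated"] - r["created"]
        if duration > threshold:
            out.append((r["job_cmd"], duration))
    return out

print(slow_jobs(rows))  # only the hour-long DestroyVMCmd exceeds 10 minutes
```

If only destroy jobs show long durations while the corresponding stop jobs finish quickly, that would point away from the hypervisor and toward the management server or DB.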
What hypervisor are you using?
Is every single VM in your environment presenting this behavior?
On Wed, Apr 13, 2016 at 10:18 AM, Simon Godard wrote:
Hi,
I am trying to understand why a destroyVirtualMachine API call would take
around 1 hour to get a successful async job result. From the CloudStack log, I
can see that the StopVmCmd occurred right away, but the DestroyVmCmd took 1
hour to complete.
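One way to quantify the gap is to pull the two completion timestamps out of the management-server log. The sketch below runs over simplified sample lines; the real log format differs, so the regex is an assumption that would need adjusting.

```python
import re
from datetime import datetime

# Two sample lines in a simplified management-server.log shape (the real
# format differs; adjust the regex to match your log). The goal is to
# measure the gap between the StopVmCmd and DestroyVmCmd entries.
log = """\
2016-04-13 09:01:02,123 DEBUG Complete async job StopVmCmd
2016-04-13 10:02:02,456 DEBUG Complete async job DestroyVmCmd
"""

stamp = re.compile(r"^(\S+ \S+?),\d+ .*?(StopVmCmd|DestroyVmCmd)", re.M)
times = {cmd: datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
         for ts, cmd in stamp.findall(log)}
gap = times["DestroyVmCmd"] - times["StopVmCmd"]
print(gap)  # how long the destroy lagged behind the stop
```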
Do you know what could cause such delays?