Yes, I can open JIRA tickets. What would you like me to do?
I'll be happy to change the "wait" parameter. Should I assume it should be 1/2 of
the value I want it to be?
From: Rafael Weingärtner [rafaelweingart...@gmail.com]
Sent: Monday, October 12, 20
There is your problem: there are currently two distinct values controlling
those async jobs.
Change that value and everything will work for you.
Can you open a JIRA ticket?
On Mon, Oct 12, 2015 at 11:51 PM, Ryan Farrington wrote:
> wait is currently configured to be 3600
wait is currently configured to be 3600
From: Rafael Weingärtner [rafaelweingart...@gmail.com]
Sent: Monday, October 12, 2015 9:46 PM
To: users@cloudstack.apache.org
Subject: [Questionable] Re: Timeout with live migration
I found something odd,
can you check the parameter called "wait"? What value is it using?
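For reference, global settings such as "wait" can be read through the listConfigurations API call. Below is a minimal Python sketch of the standard CloudStack request-signing scheme; the API key and secret are placeholders, not values from this thread:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(params, secret_key):
    # CloudStack API signing: sort parameters by name, build the query
    # string, lowercase it, HMAC-SHA1 it with the secret, base64-encode.
    query = "&".join(
        f"{k}={quote(str(v), safe='')}" for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    return base64.b64encode(digest).decode()

# Placeholder credentials for illustration only.
params = {
    "command": "listConfigurations",
    "name": "wait",
    "response": "json",
    "apikey": "YOUR_API_KEY",
}
signature = sign_request(params, "YOUR_SECRET_KEY")
# Append &signature=<url-encoded signature> to the request URL.
```

The same check can also be done from the UI under Global Settings, or directly against the management server's database.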
On Mon, Oct 12, 2015 at 10:54 PM, Ryan Farrington wrote:
> Yes the parameter was set long ago and the management server has been
> restarted numerous time over the past few days as we played with other
> p
Yes, the parameter was set long ago and the management server has been restarted
numerous times over the past few days as we played with other parameters, to no
effect.
After looking at the log a little more, does the "Failed to send command, due to
Agent:38, com.cloud.exception.OperationTimedout
I thought you were using the “migrateVirtualMachineWithVolume” command, but it
seems that you are using the “migrateVolume” command from ACS's API.
For the code I debugged, “migrateVirtualMachineWithVolume”, the parameter value
3600 means a timeout of 1 hour.
For “migrateVolume” it is the same; they both end up
Here is the full log, including the stack for the exception, that we get at the
2-hour mark. As for the migratewait, it is set to 36000, which should be 10
hours.
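A quick sanity check on the numbers quoted in this thread (the variable names below are just labels for those values, not confirmed setting names):

```python
# Timeout values quoted in the thread, in seconds.
wait = 3600          # the global "wait" setting reported earlier
migratewait = 36000  # reported as 36000, i.e. 10 hours
failure_at = 7200    # the job fails at the 2-hour mark

print(wait / 3600)         # 1.0  (hours)
print(migratewait / 3600)  # 10.0 (hours)
# The observed 7200 s failure is exactly twice "wait", which fits the
# earlier question about setting "wait" to half the desired timeout,
# and is consistent with a second value controlling these async jobs.
print(failure_at == 2 * wait)  # True
```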
2015-10-12 18:41:20,137 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-323:ctx-6d42edd7) Seq 31-1023875267: Executing request
2015-1
Now I understand what you are doing, I am familiar with that concept (live
migration of VM within a cluster, having the VHD being moved from one SR to
another).
I just got confused when I read "live migration of volumes" (a volume does
not run by itself, so that is why I asked for some more in
Hypervisor: XenServer
We are moving a data volume from one storage onto another without shutting down
the VM, because that would just be silly and a triplication of effort, with the
whole copying to secondary storage and then back off again. The volume is
staying in the same cluster, just moving to
What do you mean by live migrating a data volume?!
I understand a live migration of a VM, but volumes...
do you mean live migrating a VM that has a volume attached?
are you migrating that volume to a different cluster? or just a different
storage in the same cluster?
What hypervisor are you using?
Live migrating a data volume. We are purely on shared storage so no local
storage is involved.
From: Rafael Weingärtner [rafaelweingart...@gmail.com]
Sent: Monday, October 12, 2015 7:37 PM
To: users@cloudstack.apache.org
Subject: [Questionable] Re: Timeout with live migration
Are you live migrating a VM, or migrating a volume of a stopped VM to a
different primary storage?
If it is a running VM, is the VM allocated in a shared storage or local
storage?
On Mon, Oct 12, 2015 at 9:17 PM, Ryan Farrington wrote:
> The slow transfer is related to the storage we are trying
The slow transfer is related to the storage we are trying to migrate off of.
We are capable of getting about 350 Mbps off the disks, but when we are moving
volumes that are greater than about 500 GB we end up racing the clock, hoping
that the migration finishes before the job times out. It wo
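A rough back-of-the-envelope estimate (decimal units assumed) shows why volumes over about 500 GB at about 350 Mbps run past a 7200-second timeout:

```python
# Estimated transfer time for a 500 GB volume at ~350 Mbps
# (figures from the thread; decimal GB and Mbps assumed).
volume_bytes = 500e9
throughput_bps = 350e6                 # bits per second
seconds = volume_bytes * 8 / throughput_bps

print(round(seconds))   # 11429 s, roughly 3.2 hours
print(seconds > 7200)   # True: well past a 2-hour timeout
```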
We are currently on version 4.3.0. Hypervisor is XenServer. None of the
settings are set to 7200 seconds (or any variation that would yield 7200
seconds), but I have provided them below as a reference. Is there any other
place where 7200 might be hard-coded? We are planning on an upgrade t
I would first check your NICs' speed and load, and the amount of RAM allocated
for the migrating VM, and then check the hypervisor log files.
On Mon, Oct 12, 2015 at 8:19 PM, Jan-Arve Nygård wrote:
> What version are you running? Check if the copy.volume.wait setting is set
> to 7200 and increase it.
What version are you running? Check if the copy.volume.wait setting is set
to 7200 and increase it. If not, you could also check
job.cancel.threshold.minutes and job.expire.minutes.
-Jan-Arve
2015-10-13 0:46 GMT+02:00 Ryan Farrington :
> We are experiencing a failure in cloudstack waiting for an
We are experiencing a failure in CloudStack waiting for an async job performing
a live migration of a volume to finish. I've copied the relevant log entries
below. We acknowledge that the migration will take a few hours based on the
volume of the data, and we are looking for a way to increase the
Hi folks,
CCC Dublin 2015 is over and we had a blast.
Thanks to our sponsors who helped make this happen:
Citrix, Cloud Foundry foundation, Nuage Networks, Shapeblue, Cloudian,
Solidfire, ikoula, LPI-Japan, PC Extreme and Cloud Ops.
A few takeaways:
1-We had 150 people at the event, which is
Andrei,
Open a bug at https://issues.apache.org/jira/browse/cloudstack.
Regards,
Vadim.
On 2015-10-12 00:53, Andrei Mikhailovsky wrote:
Hi guys,
I was wondering if you've seen the same behaviour as I am currently
experiencing? I've set a recurring volume snapshot to take place every
night
> On 10 Oct 2015, at 12:35, Remi Bergsma wrote:
>
> Can you please explain what the issue is with KVM HA? In my tests, HA starts
> all VMs just fine without the hypervisor coming back. At least that is on
> current 4.6. Assuming a cluster of multiple nodes of course. It will then do
> a neigh