Piotr,
In the past I have edited the DB to change VM ownership from account A to
account B and attach the VM to account B's network. It worked fine. I think
both accounts were part of different domains. Editing the proper DB tables
might help. Just study it a bit, back up the DB, and test it on one of your
test VMs first.
--
Makrand
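The DB edit Makrand describes could be sketched roughly as below. This is a hypothetical illustration, not a tested procedure: the `vm_instance` table and its `account_id`/`domain_id` columns are assumptions to verify against your own schema, and all IDs are placeholders.

```shell
# Always back up before touching the CloudStack DB (database name assumed "cloud")
mysqldump cloud > cloud-backup.sql

mysql cloud <<'SQL'
-- Look up the target account and domain ids first (account names are placeholders)
SELECT id, account_name, domain_id FROM account
WHERE account_name IN ('accountA', 'accountB');

-- Re-own the VM: replace <vm_id>, <b_account_id>, <b_domain_id> with real values
UPDATE vm_instance
SET account_id = <b_account_id>, domain_id = <b_domain_id>
WHERE id = <vm_id>;
SQL
```

Note that CloudStack also exposes an `assignVirtualMachine` API call for moving a stopped VM between accounts, which is generally safer than a direct DB edit where it covers your case.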
Hey,
Is it possible to transfer the VM from the project to the overall pool?
Regards,
Piotr
Thank you, Makrand.
On 8/27/2018 11:11 AM, Makrand wrote:
Hi Asai,
The service offering with HA enabled will do the trick. When launching the VM,
just choose this SO. If your previous SO was not HA-enabled (and thus the VM
isn't either), you can change the SO and relaunch the VM. Just test it on one
of the VMs first.
VMs with HA enabled will come back up on their own.
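The SO change Makrand suggests can be sketched with CloudMonkey as below. All IDs are placeholders; the VM must be stopped before its service offering can be changed.

```shell
# Hedged sketch: switch a VM to an HA-enabled service offering.
cloudmonkey stop virtualmachine id=<vm-id>
# changeServiceForVirtualMachine is the underlying API call
cloudmonkey change serviceforvirtualmachine id=<vm-id> serviceofferingid=<ha-offering-id>
cloudmonkey start virtualmachine id=<vm-id>
```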
Both ways work, Makrand - but I suggest you use the "restart with cleanup"
option. In 4.11 you will also find that this option does a parallel startup and
shutdown of the old and new VR, leading to less VR downtime.
Regards,
Dag Sonstebo
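The "restart with cleanup" option Dag recommends maps to the `restartNetwork` API call with `cleanup=true`; a minimal CloudMonkey sketch (network ID is a placeholder):

```shell
# Rebuild the VR by restarting its network with cleanup
cloudmonkey restart network id=<network-id> cleanup=true
```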
On 27/08/2018, 18:38, "Makrand" wrote:
Hi Dag,
For VRs d
OK, thanks, Dag.
I think I finally get it.
On 8/27/2018 2:47 AM, Dag Sonstebo wrote:
Hi Asai,
In the context of CloudStack your metadata is effectively in the CloudStack DB. If you want to
capture the point-in-time settings for the VMs in question you would simply do a "virsh
dumpxml " again
Hi Dag,
For VRs, does destroy work in the same fashion as for system VMs, or is a
clean network restart the only option for spinning up a new VR?
--
Makrand
On Mon, Aug 27, 2018 at 7:32 PM, Swen - swen.io wrote:
Hi Alessandro,
I can confirm Dag's way. We did this a few weeks ago and it worked perfectly.
You do have a short downtime, because the VR will be recreated.
Cu Swen
-----Original Message-----
From: Dag Sonstebo
Sent: Monday, 27 August 2018 15:41
To: users@cloudstack.apache.org
Subject:
Hi Alessandro,
You can just *disable (as opposed to putting into maintenance mode)* the hosts
in the current cluster, then destroy the system VMs - CloudStack will recreate
them where it can - i.e. on the other cluster where hosts are enabled. Please
note that disabling a host just prevents VMs from starting on it.
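Dag's disable-and-destroy sequence can be sketched as below; host and system VM UUIDs are placeholders, and `updateHost` / `destroySystemVm` are the underlying API calls.

```shell
# Disable each host in the current cluster so nothing new can start there
cloudmonkey update host id=<host-id> allocationstate=Disable

# Destroy the system VMs (SSVM/CPVM); CloudStack recreates them on enabled hosts
cloudmonkey destroy systemvm id=<systemvm-id>
```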
Hi guys,
is there a way to migrate the system VMs and VR to another cluster?
I'm using XenServer with ACS 4.9.
I know that the only way is to put the cluster in maintenance mode, but I
don't want to migrate the existing instances, just the VR and SVMs.
Thank you
CloudStack has a global configuration parameter for that. However, I think you
might also be able to set the page size via the API during your command calls.
On Mon, Aug 27, 2018 at 5:04 AM, Jan-Arve Nygård
wrote:
> Hi,
>
> I seem to hit a limit while listing all volumes from the API with
> cloudmonkey as it onl
Hi Asai,
In the context of CloudStack your metadata is effectively in the CloudStack DB.
If you want to capture the point-in-time settings for the VMs in question you
would simply do a "virsh dumpxml " against the VM and capture this data
somehow. Keep in mind though it's not going to be partic
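Dag's `virsh dumpxml` suggestion could be scripted along these lines to capture a point-in-time copy of each running VM's libvirt definition (the backup path is just an example):

```shell
# Dump the libvirt domain XML of every running VM to a backup directory
mkdir -p /root/vm-xml-backup
for vm in $(virsh list --name); do
    virsh dumpxml "$vm" > "/root/vm-xml-backup/${vm}.xml"
done
```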
Hi,
I seem to hit a limit while listing all volumes from the API with
cloudmonkey, as it only lists the first 500 even with the listall=true flag.
Does anyone know if there's a limit setting in either CloudStack or
CloudMonkey that I could be hitting? I had a quick search for both but
couldn't seem to find anything.
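Two hedged ways around the 500-item cap: page through the results explicitly (`page`/`pagesize` are standard list parameters), or raise the `default.page.size` global setting, which requires a management-server restart to take effect.

```shell
# 1) Page through the volumes explicitly
cloudmonkey list volumes listall=true page=1 pagesize=500
cloudmonkey list volumes listall=true page=2 pagesize=500

# 2) Raise the global page size (restart the management server afterwards)
cloudmonkey update configuration name=default.page.size value=2000
```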