Hi guys, last week we tried to install CloudStack 4.5.1 with two KVM nodes,
defining an advanced zone with GRE isolation.
OVS never appears among the network service providers, but guest VLANs are
automatically created on OVS.
Since the GRE tunnel between the nodes was not generated automatically, we set
it up ma
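For anyone hitting the same thing: when the tunnels are not created automatically, a GRE port can be added to the OVS bridge by hand. This is a minimal sketch, not CloudStack's own provisioning flow; the bridge name `br-tun`, port names, and the IPs 192.0.2.11/192.0.2.12 are placeholders you must replace with your actual tunnel bridge and node management IPs.

```shell
# On node A: create the tunnel bridge if it does not exist yet
ovs-vsctl --may-exist add-br br-tun

# Add a GRE port pointing at node B's IP (placeholder address)
ovs-vsctl add-port br-tun gre-to-nodeb -- \
    set interface gre-to-nodeb type=gre options:remote_ip=192.0.2.12

# Repeat on node B with options:remote_ip=192.0.2.11, then verify:
ovs-vsctl show
```

The same two commands, run on each side with the peer's IP, give you a point-to-point GRE mesh between the two hypervisors.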
Hello all,
do you have any idea why all my hosts are in the avoid set, and how I can
remove this?
Your help is much appreciated.
Regards,
Ingo
2015-05-30 23:05:56,448 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
(Work-Job-Executor-113:ctx-8f509c78 job-2236/job-2238 ctx-50d9961e
FirstFitRoutingAllocator) Look
Please send the log lines before the current ones...
On 30 May 2015 at 23:34, Jochim, Ingo wrote:
> Hello all,
>
> do you have any idea why all my hosts are to avoid and how can I remove
> this.
>
> Your help is very appreciated.
>
> Regards,
> Ingo
>
> 2015-05-30 23:05:56,448 DEBUG [c.c.a.m.a.i.Fi
Hello Andrija,
here are the previous lines.
Thanks,
Ingo
2015-05-30 23:05:45,006 DEBUG [c.c.c.CapacityManagerImpl]
(Work-Job-Executor-113:ctx-8f509c78 job-2236/job-2238 ctx-50d9961e
FirstFitRoutingAllocator) Host has enough CPU and RAM available
2015-05-30 23:05:45,006 DEBUG [c.c.c.CapacityM
2015-05-30 23:05:56,115 INFO [c.c.v.VirtualMachineManagerImpl]
(Work-Job-Executor-113:ctx-8f509c78 job-2236/job-2238 ctx-50d9961e) Unable
to start VM on Host[-25-Routing] due to internal error: Only 1 ide
controller is supported
Can't be 100% sure, but that seems to be the problem...
Also search in a
Hi,
I have a question about the database HA feature in db.properties (
http://cloudstack-administration.readthedocs.org/en/latest/reliability.html#configuring-database-high-availability
)
If I understand correctly, it is up to the admin to provide an appropriate
MySQL HA setup (active-active, Galera, etc.)?
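For context, the settings the admin guide refers to live in db.properties on the management server. A sketch of what they can look like; the replica host names below are placeholders, and the replication itself (master-master, Galera, etc.) is indeed configured outside CloudStack:

```properties
# Enable database HA handling in the management server
db.ha.enabled=true

# Comma-separated list of replica hosts for the cloud and usage databases
# (placeholder host names -- replace with your own replicas)
db.cloud.slaves=replica1,replica2
db.usage.slaves=replica1,replica2
```

CloudStack only fails reads over to the listed replicas; keeping those replicas consistent with the master is the admin's job, which matches your reading of the docs.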
Hi Andrija,
there should be plenty of free CPU and RAM.
This VM was running a day ago. The only thing I did was migrate the disk from
NFS to Ceph storage.
The hosts are all up.
Is there anything I can check?
Regards,
Ingo
From: Andrija Panic [andrija.pa...@
Well, send the whole log (on pastebin.com) so we can check...
Migrating from storage to storage can result in various errors :) (related to
disk offerings, if you experimented with them, tagged them, etc.).
Was the VM running on Ceph at all?
Do you have librbd installed on the KVM nodes? etc.
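A quick way to answer the librbd question on each KVM node is to check both the library and the qemu build. A rough sketch (the exact package and library names can differ per distro):

```shell
# Is the librbd shared library known to the dynamic linker?
ldconfig -p | grep librbd

# Was the local qemu built with rbd support? It should list "rbd"
# among its supported formats.
qemu-img --help | grep -o rbd | head -1

# Sanity-check that libvirt is up and can list its storage pools
virsh pool-list --all
```

If `qemu-img` does not list rbd, the hypervisor cannot attach Ceph-backed disks no matter what CloudStack asks it to do.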
On 31 May 2015 at 00
Hi Andrija,
I'm able to run clients using Ceph on all my hypervisors and can live-migrate
them between all of them.
So I think librados should work in general.
I can also power on other clients. Capacity on the hypervisors should not be
the problem.
Here are parts of my large logfile:
https:/