With the engine on bare metal, and highly specced bare metal at that, I find
it hard to imagine a scenario where the load would be too much for the
engine to handle.
If you are that worried, I would split it into logical areas, by city, DC,
cluster, etc. But unless you are talking about an inte
It is not a good idea.
If you think that the engine can't cope with the load, add a separate engine
for the specific cluster(s).
What is the count of Datacenters, Clusters, Hosts, VMs & Storage domains?
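If counting by hand is tedious, something along these lines can pull the
totals via the API. This is only a sketch on my side: it assumes the
Ansible ovirt_*_info modules (Ansible 2.9 naming) plus a made-up engine URL
and admin@internal credentials, so adjust to your environment.

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Obtain SSO token
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api  # placeholder URL
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Gather datacenters
      ovirt_datacenter_info:
        auth: "{{ ovirt_auth }}"
      register: dcs

    - name: Gather clusters
      ovirt_cluster_info:
        auth: "{{ ovirt_auth }}"
      register: clusters

    - name: Gather hosts
      ovirt_host_info:
        auth: "{{ ovirt_auth }}"
      register: hosts

    - name: Gather VMs
      ovirt_vm_info:
        auth: "{{ ovirt_auth }}"
      register: vms

    - name: Gather storage domains
      ovirt_storage_domain_info:
        auth: "{{ ovirt_auth }}"
      register: sds

    - name: Show the totals
      debug:
        msg: >-
          DCs={{ dcs.ovirt_datacenters | length }},
          Clusters={{ clusters.ovirt_clusters | length }},
          Hosts={{ hosts.ovirt_hosts | length }},
          VMs={{ vms.ovirt_vms | length }},
          Storage domains={{ sds.ovirt_storage_domains | length }}

    - name: Revoke SSO token
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"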
Best Regards,
Strahil Nikolov

On Jan 8, 2020 03:51, yam yam wrote:
>
> Hello,
>
> given mas
Hello,
given a massive oVirt environment, I think a single engine is too small to
deal with all the workloads.
So I want to build an active-active engine cluster to distribute the load.
Is it possible for an oVirt environment to be made up of multiple engines &
DBs for load balancing?
Simone, ovirt-users,
I'm running into an issue with the ovirt.hosted_engine_setup role. I can't
decide if my problem is 1) that the SSH connection to the HostedEngineLocal
VM repeatedly fails because the new VM has no entry in
~/.ssh/known_hosts, or 2) that I don't have some variable defined
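For the known_hosts side of it, relaxing strict host key checking for that
first connection is a common workaround. This is only a sketch under
assumptions of mine (a plain inventory vars file; the path is hypothetical):

# group_vars/all.yml (hypothetical location; any inventory vars file works)
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

The same effect is available globally via host_key_checking = False in
ansible.cfg, at the cost of disabling the check everywhere.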
Then ... It's a bug and should be reported.
Best Regards,
Strahil Nikolov

On Jan 7, 2020 16:00, Chris Adams wrote:
>
> Once upon a time, m.skrzetu...@gmail.com said:
> > I'd give up on the ISO domain. I started like you and then read the docs
> > which said that ISO domain is deprecated.
> >
On Wed, Dec 18, 2019 at 7:28 PM Nathanaël Blanchet wrote:
> Hello Ondra, what do you think about this question?
> ovirt4.py may need some modifications to get the required IPs/hostnames
> when multiple VMs have multiple interfaces?
> A personalized hack works for me, but I have to modify the file each time
Thanks for the reply. I looked at the VDSM logs and found these entries:
2020-01-07 08:10:43,107-0600 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:709)
2020-01-07 08:10:48,112-0600 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList(options=None) f
Once upon a time, m.skrzetu...@gmail.com said:
> I'd give up on the ISO domain. I started like you and then read the docs
> which said that ISO domain is deprecated.
> I'd upload all files to a data domain.
Note that that only works if your data domain is NFS... iSCSI data
domains will let you u
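For reference, uploading an ISO straight into a data domain can also be
scripted. A minimal sketch with the ovirt_disk Ansible module follows; the
engine URL, credentials, domain name, file path, and disk name are all
placeholders of mine, and content_type handling should be verified against
the module docs for your version:

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Obtain SSO token
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api  # placeholder URL
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Upload an ISO to a data domain
      ovirt_disk:
        auth: "{{ ovirt_auth }}"
        name: centos7-install-iso      # hypothetical disk name
        upload_image_path: /tmp/CentOS-7-x86_64-Minimal.iso  # hypothetical path
        storage_domain: data           # hypothetical data domain name
        size: 1GiB                     # must be at least the file size
        format: raw
        content_type: iso              # needs a reasonably recent module version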
Hey folks,
- theoretical question, no live data in jeopardy -
Let's say a 3-way HCI cluster is up and running, with the engine running;
all is well. The setup was done via the GUI, including Gluster.
Now I would kill a host: power off & wipe its disks, simulating a full node
failure.
The remaining nod
On Tue, Jan 7, 2020 at 9:21 AM Christian Reiss wrote:
> Hey,
>
> thanks that solves this issue.
> Most documentation I find is marked as "deprecated", and the storage
> documentation does not clearly reflect this.
>
> Anyway,
> thanks for clearing this issue!
>
>
See here for the "sort of confirmation"
Hello,
creating a VM with a QCOW2 disk is as easy as this. The only requirement to
run it is having a QCOW2 disk named "centos7" in your engine:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Obtain SSO token
      ovirt_auth:
        url: https:///ovirt-engine/api
        usern
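The preview is cut off above. For completeness, here is a hedged sketch of
how the rest of such a playbook could look; the VM name, cluster name, and
credentials below are placeholders of mine, not from the original, and the
attach step should be checked against the ovirt_disk docs for your version:

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Obtain SSO token
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api  # placeholder URL
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Create the VM
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: centos7-vm        # hypothetical VM name
        cluster: Default        # hypothetical cluster name
        state: present

    - name: Attach the existing QCOW2 disk named "centos7"
      ovirt_disk:
        auth: "{{ ovirt_auth }}"
        name: centos7
        vm_name: centos7-vm
        interface: virtio
        bootable: true
        state: attached

    - name: Revoke SSO token
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"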
Hey,
A colleague already re-installed the cluster for new tests, so I am unable
to look this up right now. I will repeat this test afterwards.
Thanks to everyone replying.
-Chris.
On 06/01/2020 21:22, Strahil Nikolov wrote:
Do you see the gluster volume on the command line?
gluster volume list
Hey,
thanks that solves this issue.
Most documentation I find is marked as "deprecated", and the storage
documentation does not clearly reflect this.
Anyway,
thanks for clearing this issue!
On 06/01/2020 21:25, Strahil Nikolov wrote:
ISO domains are deprecated.
Just upload it to the data domain