To clarify:
My screenshot keeps defaulting the "Host Address" to the Storage FQDN, so I 
keep changing it to the correct FQDN.
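For what it's worth, a quick sanity check I use before clicking OK — with hypothetical hostnames here, substitute your own — is to confirm the front-end and storage FQDNs actually resolve to different addresses:

```shell
# Hypothetical FQDNs for illustration; substitute your own hostnames.
# The front-end (management) FQDN and the Gluster (storage) FQDN should
# resolve to different addresses on their respective networks.
getent hosts ovirt2.example.com          # front-end / management FQDN
getent hosts ovirt2-storage.example.com  # Gluster back-end FQDN
```

If both names come back with the same address, the wizard's default won't matter, but if they differ, the "Host Address" field needs the front-end one.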

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, March 20, 2021 4:30 PM, David White <[email protected]> 
wrote:

> There may be a bug in the latest installer. Or I might have missed a step 
> somewhere.
> I did use the 4.4.5 installer for hyperconverged wizard, yes.
> 

> I'm currently in the Engine console right now, and I only see 1 host.
> I've navigated to Compute -> Hosts.
> 

> That said, when I navigate to Compute -> Clusters -> Default, I see this 
> message:
> Some new hosts are detected in the cluster. You can Import them to engine or 
> Detach them from the cluster.
> 

> I clicked on Import to try to import them into the engine.
> On the next screen, I see the other two physical hosts.
> 

> I verified the Gluster peer address, as well as the front-end Host address, 
> typed in the root password, and clicked OK. The system appeared to process 
> the request, but eventually I landed back on the same "Add Hosts" screen 
> as before:
> 

> [Screenshot from 2021-03-20 16-28-56.png]
> 

> Am I missing something?
> 

> Sent with ProtonMail Secure Email.
> 

> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Saturday, March 20, 2021 4:17 PM, Jayme <[email protected]> wrote:
> 

> > If you deployed with the wizard, the hosted engine should already be HA and can 
> > run on any host. If you look at the GUI you will see a crown beside each host 
> > that is capable of running the hosted engine. 
> > 

> > On Sat, Mar 20, 2021 at 5:14 PM David White via Users <[email protected]> 
> > wrote:
> > 

> > > I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged 
> > > cluster running on Red Hat 8.3 OS.
> > > 

> > > Over the course of the setup, I noticed that I had to set up the storage 
> > > for the engine separately from the gluster bricks. 
> > > 

> > > It looks like the engine was installed onto /rhev/data-center/ on the 
> > > first host, whereas the gluster bricks for all 3 hosts are on 
> > > /gluster_bricks/.
> > > 

> > > I fear that I may already know the answer to this, but:
> > > Is it possible to make the engine highly available?
> > > 

> > > Also, thinking hypothetically here, what would happen to my VMs that are 
> > > physically on the first server, if the first server crashed? The engine 
> > > is what handles the high availability, correct? So what if a VM was 
> > > running on the first host? There would be nothing to automatically "move" 
> > > it to one of the remaining healthy hosts.
> > > 

> > > Or am I misunderstanding something here?
> > > 

> > > Sent with ProtonMail Secure Email.
> > > 

> > > _______________________________________________
> > > Users mailing list -- [email protected]
> > > To unsubscribe send an email to [email protected]
> > > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > > oVirt Code of Conduct: 
> > > https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives: 
> > > https://lists.ovirt.org/archives/list/[email protected]/message/L6MMZSMSGIK7BTUSUECU65VZRMS4N33L/


_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/FAUV7NHLSOHIU4NUS6IWA66C775T6CQM/
