[ovirt-users] Re: oVirt alternatives

2022-02-20 Thread Sandro Bonazzola
Il giorno dom 20 feb 2022 alle ore 22:47 Nathanaël Blanchet <
blanc...@abes.fr> ha scritto:

> Hello, Is okd/openshift virtualization designed to be a full replacement
> of ovirt/redhat by embedding the same level of advanced
>

oVirt is a very mature project, integrated with most of the Red Hat
ecosystem, and is now mostly in maintenance mode, with no big new features
planned.
It has live snapshots, live storage migration, memory overcommit management,
passthrough of a specific PCI device on a particular host, a VM portal, and
OpenShift IPI integration.
It lacks integrated container management.

OKD Virtualization is being developed very actively and is quickly closing the
gaps. It has integrated container management and the ability to leverage the
k8s distributed architecture/infrastructure and k8s assets like exclusive CPU
placement.
It currently lacks live snapshots, live storage migration, memory overcommit
management, passthrough of a specific PCI device on a particular host, a VM
portal (the OKD UI is closer to the Admin Portal), thin provisioning (of VMs
on top of templates), hot (un)plug of disk/memory/NIC, high availability with
VM leases, incremental backup, and VDI features like template versions and
sealing (virt-sysprep).

So OKD is not a feature-complete replacement for oVirt yet.

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


[ovirt-users] unable to download and install ldap package to integrate with AD. oVirt Engine version - ovirt-engine-4.4.9.5-1

2022-02-20 Thread umakanta.samantaray--- via Users
When trying to install the package ovirt-engine-extension-aaa-ldap-setup, I
receive the error message below. Can you please help?

# sudo yum install ovirt-engine-extension-aaa-ldap-setup
Last metadata expiration check: 0:08:27 ago on Monday 21 February 2022 11:41:51 
AM IST.
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides python3-ldap needed by 
ovirt-engine-extension-aaa-ldap-setup-1.4.5-1.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use 
not only best candidate packages)

# sudo yum install ovirt-engine-extension-aaa-ldap-setup --nobest
Last metadata expiration check: 0:16:27 ago on Monday 21 February 2022 11:41:51 
AM IST.
Error:
 Problem: conflicting requests
  - nothing provides python3-ldap needed by 
ovirt-engine-extension-aaa-ldap-setup-1.4.0-1.el8.noarch
  - nothing provides python3-ldap needed by 
ovirt-engine-extension-aaa-ldap-setup-1.4.1-1.el8.noarch
  - nothing provides python3-ldap needed by 
ovirt-engine-extension-aaa-ldap-setup-1.4.2-1.el8.noarch
  - nothing provides python3-ldap needed by 
ovirt-engine-extension-aaa-ldap-setup-1.4.3-1.el8.noarch
  - nothing provides python3-ldap needed by 
ovirt-engine-extension-aaa-ldap-setup-1.4.4-1.el8.noarch
  - nothing provides python3-ldap needed by 
ovirt-engine-extension-aaa-ldap-setup-1.4.5-1.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages)

# sudo yum install ovirt-engine-extension-aaa-ldap-setup --skip-broken
Last metadata expiration check: 0:16:58 ago on Monday 21 February 2022 11:41:51 
AM IST.
Dependencies resolved.
Nothing to do.
Complete!
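
Note that with --skip-broken the transaction "completes" only because dnf
skipped the uninstallable package entirely; nothing was installed. The error
itself says that no enabled repository provides python3-ldap, which can be
confirmed with stock dnf commands (a quick check, assuming EL8 defaults):

# does any enabled repository ship the missing dependency?
dnf provides python3-ldap
# list enabled repositories to compare against the oVirt install guide
dnf repolist enabled

If dnf provides finds nothing, the fix is enabling the repository that carries
python3-ldap (e.g. via the ovirt-release package for your version) rather than
fighting the dependency solver.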


[ovirt-users] Re: oVirt alternatives

2022-02-20 Thread Strahil Nikolov via Users
Openshift/Kubernetes virtualization is not as feature-rich as oVirt (based on 
what I read).
Best Regards,
Strahil Nikolov
 
 
  On Sun, Feb 20, 2022 at 23:49, Nathanaël Blanchet wrote:   


[ovirt-users] Re: Certificate expiration

2022-02-20 Thread Strahil Nikolov via Users
Take a backup of the engine, if you haven't done so far.
Then, with the virsh alias, try to migrate:

ssh root@ 'uptime'
virsh migrate --live HostedEngine qemu+ssh:///system
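
For reference, a quick way to confirm which host certificates have actually
expired (the path below is the usual VDSM certificate location on oVirt
hosts; verify it on your installation):

# prints the notAfter date of the host's VDSM cert; path may differ
openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem

Run it on each host and compare the notAfter dates against the expiry dates
reported in the thread.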
Best Regards,
Strahil Nikolov
 
  On Sun, Feb 20, 2022 at 17:18, Joseph Gelinas wrote:   No. 
I don't have any of the options under Installation.

> On Feb 20, 2022, at 07:52, Strahil Nikolov via Users  wrote:
> 
> Do you have the option to use 'Install' -> enroll certificate (or whatever is 
> the entry in UI ) ?
> 
> Best Regards,
> Strahil Nikolov
> 
> On Sun, Feb 20, 2022 at 8:05, Joseph Gelinas
>  wrote:
> Both I guess. The host certificates expired on the 15th; the console expires 
> on the 23rd. Right now, since the engine sees the hosts as unassigned, I don't 
> get the option to set hosts to maintenance mode, and if I try to set Enable 
> Global Maintenance I get the message: "Cannot edit VM Cluster. Operation can 
> be performed only when Host status is Up."
> 
> 
> > On Feb 19, 2022, at 14:55, Strahil Nikolov  wrote:
> > 
> > Is your issue with the host certificates or the engine ?
> > 
> > You can try to set a node in maintenance (or at least try that) and then 
> > try to reenroll the certificate from the UI.
> > 
> > Best Regards,
> > Strahil Nikolov
> > 
> > On Sat, Feb 19, 2022 at 9:48, Joseph Gelinas
> >  wrote:
> > I believe I ran `hosted-engine --deploy` on ovirt-1 to see if there was an 
> > option to reenroll that way, but when it prompted and asked if it was 
> > really what I wanted to do I ctrl-D or said no and it ran something 
> > anyways, so I ctrl-C out of it and maybe that is what messed up vdsm on 
> > that node. Not sure about ovirt-3, is there a way to fix that?
> > 
> > > On Feb 18, 2022, at 17:21, Joseph Gelinas  wrote:
> > > 
> > > Unfortunately ovirt-ha-broker & ovirt-ha-agent are just in continual 
> > > restart loops on ovirt-1 & ovirt-3 (ovirt-engine is currently on ovirt-3).
> > > 
> > > The output for broker.log:
> > > 
> > > MainThread::ERROR::2022-02-18 
> > > 22:08:58,101::broker::72::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Trying to restart the broker
> > > MainThread::INFO::2022-02-18 
> > > 22:08:58,453::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  ovirt-hosted-engine-ha broker 2.4.5 started
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,456::monitor::45::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Searching for submonitors in 
> > > /usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/submonitors
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,456::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor mem-free
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,457::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor engine-health
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor cpu-load-no-engine
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor mgmt-bridge
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor network
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor storage-domain
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor cpu-load
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::63::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Finished loading submonitors
> > > MainThread::WARNING::2022-02-18 
> > > 22:10:00,788::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> > >  Can't connect vdsm storage: Couldn't  connect to VDSM within 60 seconds 
> > > MainThread::ERROR::2022-02-18 
> > > 22:10:00,788::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Failed initializing the broker: Couldn't  connect to VDSM within 60 
> > > seconds
> > > MainThread::ERROR::2022-02-18 
> > > 22:10:00,789::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Traceback (most recent call last):
> > >  File 
> > >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> > > line 64, in run
> > >    self._storage_broker_instance = self._get_storage_broker()
> > >  File 
> > >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> > > line 143, in _get_storage_broker
> > >    return storage_broker.StorageBroker()
> > >  File 
> > >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/

[ovirt-users] Re: oVirt alternatives

2022-02-20 Thread Nathanaël Blanchet
Hello, Is okd/openshift virtualization designed to be a full replacement of 
ovirt/redhat by embedding the same level of advanced

Il giorno dom 6 feb 2022 alle ore 14:06 Wesley Stewart 
ha scritto:

> Has anyone tried the OpenShift upstream, OKD?  Looks like they support
> virtualization now, which I'm guessing is the upstream for OpenShift
> Virtualization?
>
> https://docs.okd.io/latest/virt/about-virt.html
>

I gave a presentation about it 2 days ago at FOSDEM:
https://fosdem.org/2022/schedule/event/vai_intro_okd/
but looks like recordings are not yet available at
https://video.fosdem.org/2022/
Slides are here:
https://fosdem.org/2022/schedule/event/vai_intro_okd/attachments/slides/4843/export/events/attachments/vai_intro_okd/slides/4843/OKD_Virtualization_Community.pdf




>
>
>
> On Sat, Feb 5, 2022, 10:34 PM Alex McWhirter  wrote:
>
>> Oh, I have spent years looking.
>>
>> Proxmox is probably the closest option, but has no multi-clustering
>> support. The clusters are more or less isolated from each other, and
>> would need another layer if you needed the ability to migrate between
>> them.
>>
>> XCP-ng, cool. No SPICE support. No UI for managing clustered storage
>> that is open source.
>>
>> Harvester, probably the closest / newest contender. Needs a lot more
>> attention / work.
>>
>> OpenNebula, more like a DIY AWS than anything else, but was functional
>> last I played with it.
>>
>>
>>
>> Has anyone actually played with OpenShift virtualization (replaces RHV)?
>> Wonder if OKD supports it with a similar model?
>>
>> On 2022-02-05 07:40, Thomas Hoberg wrote:
>> > There is unfortunately no formal announcement on the fate of oVirt,
>> > but with RHGS and RHV having a known end-of-life, oVirt may well shut
>> > down in Q2.
>> >
>> > So it's time to hunt for an alternative for those of us who came to
>> > oVirt because we had already rejected vSAN or Nutanix.
>> >
>> > Let's post what we find here in this thread.
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> > oVirt Code of Conduct:
>> > https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> >
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/R4YFNNCTW5VVVRKSV2OORQ2UWZ2MTUDD/
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EYPC6QXF55UCQPMQL5LDU6XMAF2CZOEG/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6JTGNYABYPZHHZ3F5Y75KF3KYDWV5OC/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*




[ovirt-users] Re: dnf update fails with oVirt 4.4 on centos 8 stream due to ansible package conflicts.

2022-02-20 Thread Gilboa Davara
I managed to upgrade a couple of 8-stream-based clusters w/ --nobest, and
thus far, I've yet to experience any issues (knocks wood feverishly).

- Gilboa

On Sat, Feb 19, 2022 at 3:21 PM Daniel McCoshen 
wrote:

> Hey all,
> I'm running ovirt 4.4 in production (4.4.5-11-1.el8), and I'm attempting
> to update the OS on my hosts. The hosts are all centos 8 stream, and dnf
> update is failing on all of them with the following output:
>
> [root@ovirthost ~]# dnf update
> Last metadata expiration check: 1:36:32 ago on Thu 17 Feb 2022 12:01:25 PM
> CST.
> Error:
>  Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch requires
> ansible, but none of the providers can be installed
>   - package ansible-2.9.27-2.el8.noarch conflicts with ansible-core >
> 2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
> provided by ansible-2.9.27-2.el8.noarch
>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
> provided by ansible-2.9.27-1.el8.noarch
>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
> provided by ansible-2.9.17-1.el8.noarch
>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
> provided by ansible-2.9.18-2.el8.noarch
>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
> provided by ansible-2.9.20-2.el8.noarch
>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
> provided by ansible-2.9.21-2.el8.noarch
>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
> provided by ansible-2.9.23-2.el8.noarch
>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
> provided by ansible-2.9.24-2.el8.noarch
>   - cannot install the best update candidate for package
> cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
>   - cannot install the best update candidate for package
> ansible-2.9.27-2.el8.noarch
>   - package ansible-2.9.20-1.el8.noarch is filtered out by exclude
> filtering
>   - package ansible-2.9.16-1.el8.noarch is filtered out by exclude
> filtering
>   - package ansible-2.9.19-1.el8.noarch is filtered out by exclude
> filtering
>   - package ansible-2.9.23-1.el8.noarch is filtered out by exclude
> filtering
> (try to add '--allowerasing' to command line to replace conflicting
> packages or '--skip-broken' to skip uninstallable packages or '--nobest' to
> use not only best candidate packages)
>
> cockpit-ovirt-dashboard.noarch is at 0.15.1-1.el8, and it looks like that
> conflicting ansible-core package was added to the 8-stream repo two days
> ago. That's when I first noticed the issue, but it might be older. When
> the earlier issues with the CentOS 8 deprecation happened, I had swapped
> out the repos on some of these hosts for the new ones, and have since added
> new hosts as well, using the updated repos. Both hosts that had been moved
> from the old repos and ones created with the new repos are experiencing
> this issue.
>
> ansible-core is being pulled from the CentOS 8 Stream AppStream repo, and
> the ansible package that cockpit-ovirt-dashboard.noarch is trying to use as
> a dependency is coming from ovirt-4.4-centos-ovirt44
>
> I'm tempted to blacklist ansible-core in my dnf conf, but that seems like
> a hacky work-around and not the actual fix here.
> Thanks,
> Dan
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3N4ZO6LXNOQNQU5HHDGNOZHDSO4IBGFF/
>
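
A minimal sketch of the exclude workaround mentioned above, assuming stock dnf
on EL8 (the dnf versionlock plugin would be the tidier alternative):

# keep the AppStream ansible-core out of the transaction until the
# oVirt packaging catches up; drop the line again afterwards
echo 'exclude=ansible-core' >> /etc/dnf/dnf.conf
dnf update

As the poster says, this only papers over the conflict; the real fix is
updated oVirt packaging that accepts ansible-core.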


[ovirt-users] Re: oVirt alternatives

2022-02-20 Thread Wesley Stewart
Thanks, I'll check them out.

On Mon, Feb 7, 2022, 3:56 AM Sandro Bonazzola  wrote:

>
>
> Il giorno dom 6 feb 2022 alle ore 14:06 Wesley Stewart <
> wstewa...@gmail.com> ha scritto:
>
>> Has anyone tried the OpenShift upstream, OKD?  Looks like they support
>> virtualization now, which I'm guessing is the upstream for OpenShift
>> Virtualization?
>>
>> https://docs.okd.io/latest/virt/about-virt.html
>>
>
> I gave a presentation about it 2 days ago at FOSDEM:
> https://fosdem.org/2022/schedule/event/vai_intro_okd/
> but looks like recordings are not yet available at
> https://video.fosdem.org/2022/
> Slides are here:
> https://fosdem.org/2022/schedule/event/vai_intro_okd/attachments/slides/4843/export/events/attachments/vai_intro_okd/slides/4843/OKD_Virtualization_Community.pdf
>
>
>
>
>>
>>
>>
>> On Sat, Feb 5, 2022, 10:34 PM Alex McWhirter  wrote:
>>
>>> Oh, I have spent years looking.
>>>
>>> Proxmox is probably the closest option, but has no multi-clustering
>>> support. The clusters are more or less isolated from each other, and
>>> would need another layer if you needed the ability to migrate between
>>> them.
>>>
>>> XCP-ng, cool. No SPICE support. No UI for managing clustered storage
>>> that is open source.
>>>
>>> Harvester, probably the closest / newest contender. Needs a lot more
>>> attention / work.
>>>
>>> OpenNebula, more like a DIY AWS than anything else, but was functional
>>> last I played with it.
>>>
>>>
>>>
>>> Has anyone actually played with OpenShift virtualization (replaces RHV)?
>>> Wonder if OKD supports it with a similar model?
>>>
>>> On 2022-02-05 07:40, Thomas Hoberg wrote:
>>> > There is unfortunately no formal announcement on the fate of oVirt,
>>> > but with RHGS and RHV having a known end-of-life, oVirt may well shut
>>> > down in Q2.
>>> >
>>> > So it's time to hunt for an alternative for those of us who came to
>>> > oVirt because we had already rejected vSAN or Nutanix.
>>> >
>>> > Let's post what we find here in this thread.
>>> > ___
>>> > Users mailing list -- users@ovirt.org
>>> > To unsubscribe send an email to users-le...@ovirt.org
>>> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> > oVirt Code of Conduct:
>>> > https://www.ovirt.org/community/about/community-guidelines/
>>> > List Archives:
>>> >
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/R4YFNNCTW5VVVRKSV2OORQ2UWZ2MTUDD/
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EYPC6QXF55UCQPMQL5LDU6XMAF2CZOEG/
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6JTGNYABYPZHHZ3F5Y75KF3KYDWV5OC/
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>


[ovirt-users] Re: Certificate expiration

2022-02-20 Thread Joseph Gelinas
No. I don't have any of the options under Installation.

> On Feb 20, 2022, at 07:52, Strahil Nikolov via Users  wrote:
> 
> Do you have the option to use 'Install' -> enroll certificate (or whatever is 
> the entry in UI ) ?
> 
> Best Regards,
> Strahil Nikolov
> 
> On Sun, Feb 20, 2022 at 8:05, Joseph Gelinas
>  wrote:
> Both I guess. The host certificates expired on the 15th; the console expires 
> on the 23rd. Right now, since the engine sees the hosts as unassigned, I don't 
> get the option to set hosts to maintenance mode, and if I try to set Enable 
> Global Maintenance I get the message: "Cannot edit VM Cluster. Operation can 
> be performed only when Host status is Up."
> 
> 
> > On Feb 19, 2022, at 14:55, Strahil Nikolov  wrote:
> > 
> > Is your issue with the host certificates or the engine ?
> > 
> > You can try to set a node in maintenance (or at least try that) and then 
> > try to reenroll the certificate from the UI.
> > 
> > Best Regards,
> > Strahil Nikolov
> > 
> > On Sat, Feb 19, 2022 at 9:48, Joseph Gelinas
> >  wrote:
> > I believe I ran `hosted-engine --deploy` on ovirt-1 to see if there was an 
> > option to reenroll that way, but when it prompted and asked if it was 
> > really what I wanted to do I ctrl-D or said no and it ran something 
> > anyways, so I ctrl-C out of it and maybe that is what messed up vdsm on 
> > that node. Not sure about ovirt-3, is there a way to fix that?
> > 
> > > On Feb 18, 2022, at 17:21, Joseph Gelinas  wrote:
> > > 
> > > Unfortunately ovirt-ha-broker & ovirt-ha-agent are just in continual 
> > > restart loops on ovirt-1 & ovirt-3 (ovirt-engine is currently on ovirt-3).
> > > 
> > > The output for broker.log:
> > > 
> > > MainThread::ERROR::2022-02-18 
> > > 22:08:58,101::broker::72::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Trying to restart the broker
> > > MainThread::INFO::2022-02-18 
> > > 22:08:58,453::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  ovirt-hosted-engine-ha broker 2.4.5 started
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,456::monitor::45::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Searching for submonitors in 
> > > /usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/submonitors
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,456::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor mem-free
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,457::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor engine-health
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor cpu-load-no-engine
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor mgmt-bridge
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor network
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor storage-domain
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor cpu-load
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::63::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Finished loading submonitors
> > > MainThread::WARNING::2022-02-18 
> > > 22:10:00,788::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> > >  Can't connect vdsm storage: Couldn't  connect to VDSM within 60 seconds 
> > > MainThread::ERROR::2022-02-18 
> > > 22:10:00,788::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Failed initializing the broker: Couldn't  connect to VDSM within 60 
> > > seconds
> > > MainThread::ERROR::2022-02-18 
> > > 22:10:00,789::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Traceback (most recent call last):
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> > >  line 64, in run
> > >self._storage_broker_instance = self._get_storage_broker()
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> > >  line 143, in _get_storage_broker
> > >return storage_broker.StorageBroker()
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > >  line 97, in __init__
> > >self._backend.connect()
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> > >  line 370, in connect
> > >connection = util.connect_vd

[ovirt-users] Re: Certificate expiration

2022-02-20 Thread Joseph Gelinas
Is there a way to do so without the web frontend? I don't have the option to
migrate it.
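
For what it's worth, a rough CLI-only sketch of moving the hosted engine with
the hosted-engine tool that ships on every HE host (it assumes the HA agents
are healthy enough to act, which may not hold on this cluster):

# on any HE host: check where the engine runs and the cluster state
hosted-engine --vm-status
# stop the engine VM on the host currently running it...
hosted-engine --vm-shutdown
# ...then start it on the target host (run this there)
hosted-engine --vm-start

With working HA agents, putting the current host into local maintenance
(hosted-engine --set-maintenance --mode=local) normally triggers the
migration by itself.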

> On Feb 20, 2022, at 07:56, Strahil Nikolov via Users  wrote:
> 
> Did you manage to move the engine VM to the only node that's in global 
> maintenance ?
> 
> Best Regards,
> Strahil Nikolov
> 
> On Sun, Feb 20, 2022 at 8:05, Joseph Gelinas
>  wrote:
> Both I guess. The host certificates expired on the 15th; the console expires 
> on the 23rd. Right now, since the engine sees the hosts as unassigned, I don't 
> get the option to set hosts to maintenance mode, and if I try to set Enable 
> Global Maintenance I get the message: "Cannot edit VM Cluster. Operation can 
> be performed only when Host status is Up."
> 
> 
> > On Feb 19, 2022, at 14:55, Strahil Nikolov  wrote:
> > 
> > Is your issue with the host certificates or the engine ?
> > 
> > You can try to set a node in maintenance (or at least try that) and then 
> > try to reenroll the certificate from the UI.
> > 
> > Best Regards,
> > Strahil Nikolov
> > 
> > On Sat, Feb 19, 2022 at 9:48, Joseph Gelinas
> >  wrote:
> > I believe I ran `hosted-engine --deploy` on ovirt-1 to see if there was an 
> > option to reenroll that way, but when it prompted and asked if it was 
> > really what I wanted to do I ctrl-D or said no and it ran something 
> > anyways, so I ctrl-C out of it and maybe that is what messed up vdsm on 
> > that node. Not sure about ovirt-3, is there a way to fix that?
> > 
> > > On Feb 18, 2022, at 17:21, Joseph Gelinas  wrote:
> > > 
> > > Unfortunately ovirt-ha-broker & ovirt-ha-agent are just in continual 
> > > restart loops on ovirt-1 & ovirt-3 (ovirt-engine is currently on ovirt-3).
> > > 
> > > The output for broker.log:
> > > 
> > > MainThread::ERROR::2022-02-18 
> > > 22:08:58,101::broker::72::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Trying to restart the broker
> > > MainThread::INFO::2022-02-18 
> > > 22:08:58,453::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  ovirt-hosted-engine-ha broker 2.4.5 started
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,456::monitor::45::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Searching for submonitors in 
> > > /usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/submonitors
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,456::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor mem-free
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,457::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor engine-health
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor cpu-load-no-engine
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor mgmt-bridge
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor network
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor storage-domain
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor cpu-load
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::63::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Finished loading submonitors
> > > MainThread::WARNING::2022-02-18 
> > > 22:10:00,788::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> > >  Can't connect vdsm storage: Couldn't  connect to VDSM within 60 seconds 
> > > MainThread::ERROR::2022-02-18 
> > > 22:10:00,788::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Failed initializing the broker: Couldn't  connect to VDSM within 60 
> > > seconds
> > > MainThread::ERROR::2022-02-18 
> > > 22:10:00,789::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Traceback (most recent call last):
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> > >  line 64, in run
> > >self._storage_broker_instance = self._get_storage_broker()
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> > >  line 143, in _get_storage_broker
> > >return storage_broker.StorageBroker()
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > >  line 97, in __init__
> > >self._backend.connect()
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> > >  line 370, in connect
> > >connecti

[ovirt-users] Re: VMs losing network interfaces

2022-02-20 Thread Strahil Nikolov via Users
Do you see all NICs in the UI ? What type are they ?
Set this alias on the hypervisors:

alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'

and then use 'virsh dumpxml name-of-vm' to identify how many NICs the VM has.
If you got the correct settings in oVirt, use 'lspci -v' inside the guest.
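
A quick way to pull just the NIC definitions out of that dump ('myvm' below
is a placeholder VM name):

# show each <interface> stanza with its model and MAC address
virsh dumpxml myvm | grep -A5 '<interface'

Comparing the MAC addresses listed there with 'ip link' inside the guest
shows whether the interface vanished from the VM definition or only inside
the guest OS.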
Best Regards,
Strahil Nikolov
 
 
  On Sun, Feb 20, 2022 at 11:32, Jonathan Baecker wrote:   
Hello everybody,

I have a strange behavior here: we have a 3-node self-hosted cluster
with around 20 VMs running on it. For a while now I have had the problem
that one VM loses its network interface after some days. But because
this VM was only for testing, I was too lazy to dig deeper and figure
out what was happening.

Now I have a second VM with the same problem, and this VM is more
important. Both VMs run Debian 10 and use CIFS mounts, so maybe that
is related?

Has anyone of you seen this behavior? And can you give me a hint how I
can fix it?

At the moment I can't provide a log file, because I don't know the
exact time when this happened. And I also don't know if the problem
comes from oVirt or from the operating system inside the VMs.

Have a nice day!

Jonathan



[ovirt-users] Re: Certificate expiration

2022-02-20 Thread Joseph Gelinas
Right, I don't have those options, because the hosts are listed as unassigned.
I can't migrate the engine, and I can't put anything into maintenance, so the
Installation menu never becomes available.
 

> On Feb 20, 2022, at 07:52, Strahil Nikolov  wrote:
> 
> Do you have the option to use 'Install' -> enroll certificate (or whatever is 
> the entry in UI ) ?
> 
> Best Regards,
> Strahil Nikolov
> 
> On Sun, Feb 20, 2022 at 8:05, Joseph Gelinas
>  wrote:
> Both I guess. The host certificates expired on the 15th; the console expires 
> on the 23rd. Right now, since the engine sees the hosts as unassigned, I don't 
> get the option to set hosts to maintenance mode, and if I try to set Enable 
> Global Maintenance I get the message: "Cannot edit VM Cluster. Operation can 
> be performed only when Host status is Up."
> 
> 
> > On Feb 19, 2022, at 14:55, Strahil Nikolov  wrote:
> > 
> > Is your issue with the host certificates or the engine ?
> > 
> > You can try to set a node in maintenance (or at least try that) and then 
> > try to reenroll the certificate from the UI.
> > 
> > Best Regards,
> > Strahil Nikolov
> > 
> > On Sat, Feb 19, 2022 at 9:48, Joseph Gelinas
> >  wrote:
> > I believe I ran `hosted-engine --deploy` on ovirt-1 to see if there was an 
> > option to reenroll that way, but when it prompted and asked if it was 
> > really what I wanted to do I ctrl-D or said no and it ran something 
> > anyways, so I ctrl-C out of it and maybe that is what messed up vdsm on 
> > that node. Not sure about ovirt-3, is there a way to fix that?
> > 
> > > On Feb 18, 2022, at 17:21, Joseph Gelinas  wrote:
> > > 
> > > Unfortunately ovirt-ha-broker & ovirt-ha-agent are just in continual 
> > > restart loops on ovirt-1 & ovirt-3 (ovirt-engine is currently on ovirt-3).
> > > 
> > > The output for broker.log:
> > > 
> > > MainThread::ERROR::2022-02-18 
> > > 22:08:58,101::broker::72::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Trying to restart the broker
> > > MainThread::INFO::2022-02-18 
> > > 22:08:58,453::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  ovirt-hosted-engine-ha broker 2.4.5 started
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,456::monitor::45::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Searching for submonitors in 
> > > /usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/submonitors
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,456::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor mem-free
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,457::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor engine-health
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor cpu-load-no-engine
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor mgmt-bridge
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor network
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor storage-domain
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Loaded submonitor cpu-load
> > > MainThread::INFO::2022-02-18 
> > > 22:09:00,460::monitor::63::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> > >  Finished loading submonitors
> > > MainThread::WARNING::2022-02-18 
> > > 22:10:00,788::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> > >  Can't connect vdsm storage: Couldn't  connect to VDSM within 60 seconds 
> > > MainThread::ERROR::2022-02-18 
> > > 22:10:00,788::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Failed initializing the broker: Couldn't  connect to VDSM within 60 
> > > seconds
> > > MainThread::ERROR::2022-02-18 
> > > 22:10:00,789::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> > >  Traceback (most recent call last):
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> > >  line 64, in run
> > >self._storage_broker_instance = self._get_storage_broker()
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> > >  line 143, in _get_storage_broker
> > >return storage_broker.StorageBroker()
> > >  File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > >  line 97, in __init__
> > >self._backend.connect()
> > >  File 
> > > "/usr/lib/python3.6/s

[ovirt-users] Re: Certificate expiration

2022-02-20 Thread Strahil Nikolov via Users
Do you have the option to use 'Install' -> enroll certificate (or whatever is 
the entry in UI ) ?
Best Regards,
Strahil Nikolov
 
 
  On Sun, Feb 20, 2022 at 8:05, Joseph Gelinas wrote:
Both I guess. The host certificates expired on the 15th; the console expires on
the 23rd. Right now, since the engine sees the hosts as unassigned, I don't get
the option to set hosts to maintenance mode, and if I try to set Enable Global
Maintenance I get the message: "Cannot edit VM Cluster. Operation can be
performed only when Host status is Up."


> On Feb 19, 2022, at 14:55, Strahil Nikolov  wrote:
> 
> Is your issue with the host certificates or the engine ?
> 
> You can try to set a node in maintenance (or at least try that) and then try 
> to reenroll the certificate from the UI.
> 
> Best Regards,
> Strahil Nikolov
> 
> On Sat, Feb 19, 2022 at 9:48, Joseph Gelinas
>  wrote:
> I believe I ran `hosted-engine --deploy` on ovirt-1 to see if there was an 
> option to reenroll that way, but when it prompted and asked if it was really 
> what I wanted to do I ctrl-D or said no and it ran something anyways, so I 
> ctrl-C out of it and maybe that is what messed up vdsm on that node. Not sure 
> about ovirt-3, is there a way to fix that?
> 
> > On Feb 18, 2022, at 17:21, Joseph Gelinas  wrote:
> > 
> > Unfortunately ovirt-ha-broker & ovirt-ha-agent are just in continual 
> > restart loops on ovirt-1 & ovirt-3 (ovirt-engine is currently on ovirt-3).
> > 
> > The output for broker.log:
> > 
> > MainThread::ERROR::2022-02-18 
> > 22:08:58,101::broker::72::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> >  Trying to restart the broker
> > MainThread::INFO::2022-02-18 
> > 22:08:58,453::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> >  ovirt-hosted-engine-ha broker 2.4.5 started
> > MainThread::INFO::2022-02-18 
> > 22:09:00,456::monitor::45::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Searching for submonitors in 
> > /usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/submonitors
> > MainThread::INFO::2022-02-18 
> > 22:09:00,456::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor mem-free
> > MainThread::INFO::2022-02-18 
> > 22:09:00,457::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor engine-health
> > MainThread::INFO::2022-02-18 
> > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor cpu-load-no-engine
> > MainThread::INFO::2022-02-18 
> > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor mgmt-bridge
> > MainThread::INFO::2022-02-18 
> > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor network
> > MainThread::INFO::2022-02-18 
> > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor storage-domain
> > MainThread::INFO::2022-02-18 
> > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor cpu-load
> > MainThread::INFO::2022-02-18 
> > 22:09:00,460::monitor::63::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Finished loading submonitors
> > MainThread::WARNING::2022-02-18 
> > 22:10:00,788::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> >  Can't connect vdsm storage: Couldn't  connect to VDSM within 60 seconds 
> > MainThread::ERROR::2022-02-18 
> > 22:10:00,788::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> >  Failed initializing the broker: Couldn't  connect to VDSM within 60 seconds
> > MainThread::ERROR::2022-02-18 
> > 22:10:00,789::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> >  Traceback (most recent call last):
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", 
> >line 64, in run
> >    self._storage_broker_instance = self._get_storage_broker()
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", 
> >line 143, in _get_storage_broker
> >    return storage_broker.StorageBroker()
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > line 97, in __init__
> >    self._backend.connect()
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> > line 370, in connect
> >    connection = util.connect_vdsm_json_rpc(logger=self._logger)
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", line 
> >472, in connect_vdsm_json_rpc
> >    __vdsm_json_rpc_connect(logger, timeout)
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", line 
> >415, in __vdsm_json_rpc_connect

[ovirt-users] Re: Certificate expiration

2022-02-20 Thread Strahil Nikolov via Users
Did you manage to move the engine VM to the only node that's in global 
maintenance ?
Best Regards,
Strahil Nikolov
 
 
  On Sun, Feb 20, 2022 at 8:05, Joseph Gelinas wrote:
Both I guess. The host certificates expired on the 15th; the console expires on
the 23rd. Right now, since the engine sees the hosts as unassigned, I don't get
the option to set hosts to maintenance mode, and if I try to set Enable Global
Maintenance I get the message: "Cannot edit VM Cluster. Operation can be
performed only when Host status is Up."


> On Feb 19, 2022, at 14:55, Strahil Nikolov  wrote:
> 
> Is your issue with the host certificates or the engine ?
> 
> You can try to set a node in maintenance (or at least try that) and then try 
> to reenroll the certificate from the UI.
> 
> Best Regards,
> Strahil Nikolov
> 
> On Sat, Feb 19, 2022 at 9:48, Joseph Gelinas
>  wrote:
> I believe I ran `hosted-engine --deploy` on ovirt-1 to see if there was an 
> option to reenroll that way, but when it prompted and asked if it was really 
> what I wanted to do I ctrl-D or said no and it ran something anyways, so I 
> ctrl-C out of it and maybe that is what messed up vdsm on that node. Not sure 
> about ovirt-3, is there a way to fix that?
> 
> > On Feb 18, 2022, at 17:21, Joseph Gelinas  wrote:
> > 
> > Unfortunately ovirt-ha-broker & ovirt-ha-agent are just in continual 
> > restart loops on ovirt-1 & ovirt-3 (ovirt-engine is currently on ovirt-3).
> > 
> > The output for broker.log:
> > 
> > MainThread::ERROR::2022-02-18 
> > 22:08:58,101::broker::72::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> >  Trying to restart the broker
> > MainThread::INFO::2022-02-18 
> > 22:08:58,453::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> >  ovirt-hosted-engine-ha broker 2.4.5 started
> > MainThread::INFO::2022-02-18 
> > 22:09:00,456::monitor::45::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Searching for submonitors in 
> > /usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/submonitors
> > MainThread::INFO::2022-02-18 
> > 22:09:00,456::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor mem-free
> > MainThread::INFO::2022-02-18 
> > 22:09:00,457::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor engine-health
> > MainThread::INFO::2022-02-18 
> > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor cpu-load-no-engine
> > MainThread::INFO::2022-02-18 
> > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor mgmt-bridge
> > MainThread::INFO::2022-02-18 
> > 22:09:00,459::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor network
> > MainThread::INFO::2022-02-18 
> > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor storage-domain
> > MainThread::INFO::2022-02-18 
> > 22:09:00,460::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Loaded submonitor cpu-load
> > MainThread::INFO::2022-02-18 
> > 22:09:00,460::monitor::63::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
> >  Finished loading submonitors
> > MainThread::WARNING::2022-02-18 
> > 22:10:00,788::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> >  Can't connect vdsm storage: Couldn't  connect to VDSM within 60 seconds 
> > MainThread::ERROR::2022-02-18 
> > 22:10:00,788::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> >  Failed initializing the broker: Couldn't  connect to VDSM within 60 seconds
> > MainThread::ERROR::2022-02-18 
> > 22:10:00,789::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> >  Traceback (most recent call last):
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", 
> >line 64, in run
> >    self._storage_broker_instance = self._get_storage_broker()
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", 
> >line 143, in _get_storage_broker
> >    return storage_broker.StorageBroker()
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > line 97, in __init__
> >    self._backend.connect()
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> > line 370, in connect
> >    connection = util.connect_vdsm_json_rpc(logger=self._logger)
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", line 
> >472, in connect_vdsm_json_rpc
> >    __vdsm_json_rpc_connect(logger, timeout)
> >  File 
> >"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", line 
> >415, in __vdsm_json_rpc_connect
> >    timeou

[ovirt-users] Re: Broke my GlusterFS somehow

2022-02-20 Thread Strahil Nikolov via Users
* gluster volume info all
 
 
  On Sun, Feb 20, 2022 at 14:46, Strahil Nikolov wrote:
In order to have an idea how to help you, provide the following from all nodes
(separate the info per node):

ip a s
gluster pool list
gluster peer status
gluster volume list
gluster volume status all
gluster volume all

Best Regards,
Strahil Nikolov
 
  On Sun, Feb 20, 2022 at 7:14, Patrick Hibbs wrote:   
OK, where to begin.

As for your Gluster issue, Gluster maintains it's own copy of the
configuration for each brick outside of oVirt / VDSM. As you have
changed the network config manually, you also needed to change the
Gluster config to match as well. The fact that you haven't is the
reason why Gluster failed to restart the volume.

However, in a hyperconverged configuration, oVirt maintains the gluster
configuration in its database. Manually fixing Gluster's configuration
on the bricks themselves won't fix the engine's copy. (Believe me, I
had to fix this before myself because I didn't use hostnames initially
for the bricks. It's a pain to manually fix the database.) That copy is
used to connect the VMs to their storage. If the engine's copy doesn't
match Gluster's config, you'll have a working Gluster volume but the
hosts won't be able to start VMs.

Essentially, in a hyperconverged configuration oVirt doesn't allow
removal of host with a Gluster brick unless removal of that host won't
break Gluster and prevent the volume from running. (I.e. you can't
remove a host if doing so would cause the volume to loose quorum.)

Your options for fixing Gluster are either:
    1. Add enough new bricks to the Gluster volumes so that
removal of an old host (brick) doesn't cause quorum loss.

    - OR -

    2. Manually update the engine's database with the engine and
all hosts offline to point to the correct hosts, after manually
updating the bricks and bringing back up the volume.

The first option is your safest bet. But that assumes that the volume
is up and can accept new bricks in the first place. If not, you could
potentially still do the first option but it would require reverting
your network configuration changes on each host first.

The second option is one of last resort. This is the reason why I said
updating the interfaces manually instead of using the web interface was
a bad idea. If possible, use the first option. If not, you'd be better
off just hosing the oVirt installation and reinstalling from scratch.

If you *really* need to use the second option, you'll need to follow
these instructions on each brick:
https://serverfault.com/questions/631365/rename-a-glusterfs-peer

and then update the engine database manually to point to the correct
hostnames for each brick. (Keep in mind I am *NOT* recommending that
you do this. This information is provided for educational /
experimental purposes only.)
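
For context, the approach in that link boils down to rewriting glusterd's
stored peer/volume metadata, roughly like this (a sketch only;
/var/lib/glusterd is the default GlusterFS state directory and the hostnames
are placeholders, so verify on your version, with the volume stopped and
backups in hand):

systemctl stop glusterd
# rewrite every reference to the old peer name in glusterd's state files
# (old-host/new-host names below are placeholders)
grep -rl 'old-host.example.com' /var/lib/glusterd | \
    xargs sed -i 's/old-host.example.com/new-host.example.com/g'
systemctl start glusterd

The engine database would still need the matching update afterwards, which is
exactly why this is called a last resort.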

As for Matthew's solution, the only reason it worked at all was because
you removed and re-added the host from the cluster. Had you not done
that, VDSM would have overwritten your changes on the next host upgrade
/ reinstall, and as you have seen that solution won't completely fix a
host in a hyperconverged configuration.

As to the question about oVirt's Logical Networks, what I meant was
that oVirt doesn't care what the IP configuration is for them, and that
if you wanted to change which network the roles used you needed to do
so elsewhere in the web interface. The only thing that does matter for
each role is that all of the clients using or hosts providing that role
can communicate with each other on that interface. (I.e. If you use
"Network Bob" for storage and migration, then all hosts with a "Network
Bob" interface must be able to communicate with each other over that
interface. If you use "Network Alice" for VM consoles, then all end-
user workstations must be able to communicate with the "Network Alice"
interface. The exact IPs, vlan IDs, routing tables, and firewall
restrictions for a logical network don't matter as long as each role
can still reach the role on other hosts over the assigned interface.)

-Patrick Hibbs

On Sun, 2022-02-20 at 01:17 +, Abe E wrote:
> So upon changing my ovirt nodes (3Hyperconverged Gluster) as well as
> my engines hostname without a hitch I had an issue with 1 node and
> somehow I did something that broke its gluster and it wouldnt
> activate,
> So the gluster service wont start and after trying to open the node
> from webgui to see what its showing in its virtualization tab I was
> able to see that it allows me to run the hyperconverged wizard using
> the existing config. Due to this i lost the engine because well the
> 3rd node is just arbiter and node 2 complained about not having
> shared storage.
> 
> This node is the one which I built ovirt gluster from so i assumed it
> would rebuild its gluster.. i accidentally clicked cleanup which got
> rid of my gluster brick mounts :)) then I tried to halt it and
> rebuild using existing configuration. Here is my issue though, am I
> ab

[ovirt-users] Re: Broke my GlusterFS somehow

2022-02-20 Thread Strahil Nikolov via Users
In lrder to have an idea how to help you provide the following from all nodes 
(separate the info per node):
ip a sgluster pool listgluster peer statusgluster volume listgluster volume 
status allgluster volume all
Best Regards,Strahil Nikolov 
 
  On Sun, Feb 20, 2022 at 7:14, Patrick Hibbs wrote:   
OK, where to begin.

As for your Gluster issue, Gluster maintains its own copy of the
configuration for each brick outside of oVirt / VDSM. As you have
changed the network config manually, you also needed to change the
Gluster config to match as well. The fact that you haven't is the
reason why Gluster failed to restart the volume.

However, in a hyperconverged configuration, oVirt maintains the gluster
configuration in its database. Manually fixing Gluster's configuration
on the bricks themselves won't fix the engine's copy. (Believe me, I
had to fix this before myself because I didn't use hostnames initially
for the bricks. It's a pain to manually fix the database.) That copy is
used to connect the VMs to their storage. If the engine's copy doesn't
match Gluster's config, you'll have a working Gluster volume but the
hosts won't be able to start VMs.

Essentially, in a hyperconverged configuration oVirt doesn't allow
removal of a host with a Gluster brick unless removal of that host won't
break Gluster and prevent the volume from running. (I.e. you can't
remove a host if doing so would cause the volume to lose quorum.)

Your options for fixing Gluster are either:
    1. Add enough new bricks to the Gluster volumes so that
removal of an old host (brick) doesn't cause quorum loss.

    - OR -

    2. Manually update the engine's database with the engine and
all hosts offline to point to the correct hosts, after manually
updating the bricks and bringing back up the volume.

The first option is your safest bet. But that assumes that the volume
is up and can accept new bricks in the first place. If not, you could
potentially still do the first option but it would require reverting
your network configuration changes on each host first.

The second option is one of last resort. This is the reason why I said
updating the interfaces manually instead of using the web interface was
a bad idea. If possible, use the first option. If not, you'd be better
off just hosing the oVirt installation and reinstalling from scratch.

If you *really* need to use the second option, you'll need to follow
these instructions on each brick:
https://serverfault.com/questions/631365/rename-a-glusterfs-peer

and then update the engine database manually to point to the correct
hostnames for each brick. (Keep in mind I am *NOT* recommending that
you do this. This information is provided for educational /
experimental purposes only.)

As for Matthew's solution, the only reason it worked at all was because
you removed and re-added the host from the cluster. Had you not done
that, VDSM would have overwritten your changes on the next host upgrade
/ reinstall, and as you have seen that solution won't completely fix a
host in a hyperconverged configuration.

As to the question about oVirt's Logical Networks, what I meant was
that oVirt doesn't care what the IP configuration is for them, and that
if you wanted to change which network the roles used you needed to do
so elsewhere in the web interface. The only thing that does matter for
each role is that all of the clients using or hosts providing that role
can communicate with each other on that interface. (I.e. If you use
"Network Bob" for storage and migration, then all hosts with a "Network
Bob" interface must be able to communicate with each other over that
interface. If you use "Network Alice" for VM consoles, then all end-
user workstations must be able to communicate with the "Network Alice"
interface. The exact IPs, vlan IDs, routing tables, and firewall
restrictions for a logical network don't matter as long as each role
can still reach the role on other hosts over the assigned interface.)

-Patrick Hibbs

On Sun, 2022-02-20 at 01:17 +, Abe E wrote:
> So upon changing my ovirt nodes (3Hyperconverged Gluster) as well as
> my engines hostname without a hitch I had an issue with 1 node and
> somehow I did something that broke its gluster and it wouldnt
> activate,
> So the gluster service wont start and after trying to open the node
> from webgui to see what its showing in its virtualization tab I was
> able to see that it allows me to run the hyperconverged wizard using
> the existing config. Due to this i lost the engine because well the
> 3rd node is just arbiter and node 2 complained about not having
> shared storage.
> 
> This node is the one which I built ovirt gluster from so i assumed it
> would rebuild its gluster.. i accidentally clicked cleanup which got
> rid of my gluster brick mounts :)) then I tried to halt it and
> rebuild using existing configuration. Here is my issue though, am I
> able to rebuild my node?
> 
> This is a new lab system so I believe i have all my vms still

[ovirt-users] VMs losing network interfaces

2022-02-20 Thread Jonathan Baecker

Hello everybody,

I have a strange behavior here: we have a 3-node self-hosted cluster
with around 20 VMs running on it. For a while now I have had the problem
that one VM loses its network interface after some days. But because
this VM was only for testing, I was too lazy to dig deeper and figure
out what was happening.

Now I have a second VM with the same problem, and this VM is more
important. Both VMs run Debian 10 and use CIFS mounts, so maybe that
is related?

Has anyone of you seen this behavior? And can you give me a hint how I
can fix it?

At the moment I can't provide a log file, because I don't know the
exact time when this happened. And I also don't know if the problem
comes from oVirt or from the operating system inside the VMs.

Have a nice day!

Jonathan



[ovirt-users] Re: ovirtmgmt VLAN and IP Change

2022-02-20 Thread Strahil Nikolov via Users
You can try to edit the details in /var/lib/vdsm/persistence/netconf, but
first make a backup of it.
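
A minimal sketch of that backup-then-edit flow (the netconf path is from the
note above; the nets/ subdirectory layout is an assumption, so check your
VDSM version):

# keep a restorable copy before touching anything
cp -a /var/lib/vdsm/persistence/netconf /var/lib/vdsm/persistence/netconf.bak
# the persisted network definitions are JSON files underneath, e.g.:
# (nets/ layout assumed, verify on your host)
ls /var/lib/vdsm/persistence/netconf/nets/

Edit the relevant JSON, then restart the host's VDSM services so the
persisted configuration is picked up.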
Best Regards,
Strahil Nikolov
 
  On Sun, Feb 20, 2022 at 2:23, Abe E wrote:
Your method worked!
I additionally had to run the engine rename service script to change the
hostname of the engine as well.
My only issue now is that after re-adding all the nodes with new hostnames, my
main host (node 1) will not allow me to remove and re-add it with a new
hostname, as there is a Gluster volume on it.

It will not let me remove it to re-add it because there's a Gluster volume on
it. Have you run into this?
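
The rename script referenced above is presumably the documented engine rename
tool; on recent versions it lives at the path below (run on the engine VM;
verify the path on your version):

# walks through updating the engine's FQDN; path per the documented tool
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename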