Yes, that's the usually recommended solution.
On Mon, Sep 30, 2024 at 2:29 PM devis--- via Users wrote:
> Hello Marcos,
>
> then what is the suggestion in the case of a multi-node cluster?
> Do I have to remove a node, perform a clean installation with el9, and
> reconnect it to the cluster?
>
> Devis
>
If I remember correctly, the cockpit module to install the self-hosted engine
graphically has been deprecated since at least 4.4 (and for sure in 4.5).
Since then, the cockpit interface is only useful for preparing the gluster
bricks. But note that gluster and the hyperconverged self-hosted engine are
also dep
We were just starting to depend on this workflow...
On Fri, Jan 26, 2024 at 2:02 PM Ewoud Kohl van Wijngaarden <
ewoud+ov...@kohlvanwijngaarden.nl> wrote:
> Hello everyone,
>
> Foreman is a bit late in updating Ruby to a newer version. Looking ahead
> we're aiming at Ruby 3.1+ but ovirt-engine-sd
Unless someone from the community steps up to take Red Hat's role, there
won't be any 4.6.
On Fri, Jan 12, 2024 at 8:51 AM Diggy Mc wrote:
>
> Isn't the oVirt 4.5 Hosted Engine built on CentOS Stream 8? Stream 8
> ends in May 2024. I ask because we are still running on 4.4 and are
> thinking
I think it depends on the MAC addresses.
If you see that the MAC addresses are not sequential, delete the NICs and
recreate them in the order you want them to be.
It worked for us.
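The sequential-MAC check can be sketched like this (a hypothetical helper, not part of the oVirt SDK; it just compares the MACs as integers):

```python
def macs_sequential(macs):
    """Return True if the MAC addresses form one strictly consecutive sequence."""
    values = [int(mac.replace(":", ""), 16) for mac in macs]
    return all(b - a == 1 for a, b in zip(values, values[1:]))

# NICs created in order usually get consecutive MACs from the pool
print(macs_sequential(["00:16:3e:00:00:01", "00:16:3e:00:00:02", "00:16:3e:00:00:03"]))  # True
```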
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
On Wed, May 24, 2023 at 1:46 AM Alan G wrote:
>
Hi,
Running a single vm that shares compute resources from multiple nodes is not
possible. Moreover, Intel E8400 processors are too old and have not been
supported since oVirt 4.3.
Best regards,
Guillaume Pavese
On Fri, Jan 13, 2023 at 8:01 PM Nathanaël Blanchet via
the limitation
Guillaume Pavese
On Wed, Jan 11, 2023 at 11:35 PM wrote:
> Hello,
>
> I cannot get sparsifying to work in the latest oVirt 4.5.4 release (not that
> I have tried with a previous release)
>
> I have two Data domains con
Sure,
At least for us, that may prove to be useful!
Guillaume Pavese
On Thu, Nov 17, 2022 at 3:21 AM wrote:
> Hi, I got the same issue yesterday when I was trying to migrate a VM. The
> migration failed because of the expired certificate
Guillaume Pavese
On Wed, Jul 27, 2022 at 11:45 AM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> Did anyone have the chance to look at this problem?
>
> It seems that it may be related to another problem we h
error that the import failed, and in the import
log we can see a similar "qemu-img: error while writing at byte xxx: No
space left on device"
Obviously, it is not a storage space problem as in both situations we are
using an iSCSI LUN with ample free space.
Best regards,
Guillaume Pavese
cockpit-ovirt-dashboard is deprecated; you should install from the command
line on your host: ovirt-hosted-engine-setup
Guillaume Pavese
On Thu, Jul 21, 2022 at 7:10 PM less foobar via Users
wrote:
> When you try to follow the documentation for rhe
' execution was completed with VDSM
job status 'failed'
I do want the conversion from raw/sparse to qcow2/sparse to happen, as I
want to activate incremental backups.
I think that it may fail because the virtual size is bigger than the
initial size, as I think someone has explained o
for vdsm
sanlock is configured for vdsm
Current revision of multipath.conf detected, preserving
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
Running configure...
Reconfiguration of passwd is done.
Reconfiguration of libvirt is done.
Done configuring modules t
-rhel
The standalone vdo command line tools are deprecated if I
remember correctly.
Guillaume Pavese
On Fri, Jul 1, 2022 at 11:23 PM Diego Ercolani
wrote:
> On Friday, July 1, 2022 at 15:53:50 CEST, Michal Skrivanek wrote:
> > which
I can confirm that I had exactly the same problem on oVirt 4.4.10.
As you said:
"The error suggest that the iSCSI portal port number (which is defaulted to
3260 in the UI) is not being properly passed into the python module
ovirt/ovirt/plugins/modules/ovirt_host.py"
Guillaume Pavese
- mount the image with guestmount
- do: echo "exclude=postgresql-jdbc" >> /etc/dnf/dnf.conf in the image
- rebuild the ova and point the installer to it when asked for a custom ova
path
This stopped the installation playbook from upgrading to the unsupported
postgresql-jdbc.
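If applied, the image's /etc/dnf/dnf.conf would then carry the exclude line; a sketch of the resulting file (the [main] header is dnf's standard layout, other options elided):

```ini
# /etc/dnf/dnf.conf inside the mounted appliance image (sketch)
[main]
exclude=postgresql-jdbc
```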
these certificates?
Guillaume Pavese
On Mon, Jun 13, 2022 at 6:23 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> Thanks for your answer, I checked but I am still stuck :
>
> I confirm that the servlet can be reache
t some point by following this
procedure
https://ovirt.org/documentation/administration_guide/index.html#Replacing_the_Manager_CA_Certificate
We also renewed the certificates during a standard engine-setup upgrade
to 4.4.10.
Guillaume Pavese
ills appear on users' Option -> "User's
Public Key" in the engine's UI
What can I try to fix this?
Guillaume Pavese
On Mon, May 10, 2021 at 9:47 PM Nathanaël Blanchet wrote:
> Hi,
>
> I can't still connect t
Thank you Didi,
I can confirm that engine-setup detected the approaching expiration of the
engine certificate and proposed to renew it.
We'll try proposing a documentation fix
Guillaume Pavese
On Mon, Apr 25, 2022 at 9:02 PM Yedidyah Bar
/documentation/administration_guide/index.html#Replacing_the_Manager_CA_Certificate
Doc RHV :
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/administration_guide/index#Replacing_the_Manager_CA_Certificate
Any help ?
Guillaume Pavese
investigate whether oVirt on Ceph in HCI is doable.
Thanks for any feedback.
Guillaume Pavese
On Mon, Feb 7, 2022 at 10:47 PM Nir Soffer wrote:
> On Mon, Feb 7, 2022 at 3:04 PM Sandro Bonazzola
> wrote:
>
>>
>>
>> Il giorno
Should we rethink it?
Guillaume Pavese
On Fri, Feb 4, 2022 at 4:49 PM Sandro Bonazzola wrote:
>
>
> On Fri, Feb 4, 2022 at 8:19 AM Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>> Hi Sandro,
>>
>&g
ived from engine.fqdn:443 is the old one, not the new
custom one.
Are there missing steps in the above procedures?
Best,
Guillaume Pavese
On Tue, Jan 25, 2022 at 1:57 AM Sandro Bonazzola
wrote:
>
>
> On Sun, Jan 23, 2022 at 08:1
Hello,
You mention BIOS, but as the doc indicates, TPM is only supported on UEFI:
"TPM devices can only be used on x86_64 machines with UEFI firmware and
PowerPC machines with pSeries firmware installed."
Guillaume Pavese
On Thu, Oct 21,
impressive, and the underlying
qemu/libvirt bugs are now fixed or close to being fixed.
Guillaume Pavese
On Thu, Sep 9, 2021 at 7:00 PM Strahil Nikolov via Users
wrote:
> Did you enable libgfapi?
> engine-config -s LibgfApiSupported=true
>
&g
I opened: https://bugzilla.redhat.com/show_bug.cgi?id=1975076
Guillaume Pavese
On Tue, Jun 22, 2021 at 12:26 PM Strahil Nikolov
wrote:
> Error during ValidateFailure.: java.lang.IllegalArgumentExcepti
> on: VM64BitMaxMemorySizeInMB has no
ower compatibility version but only 4.6 is available
Guillaume Pavese
On Tue, Jun 15, 2021 at 8:11 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> On ovirt 4.4.6, I created a VM and made a template from it.
> I would like to co
ou will not be able to attach it back to an older Data Center."
In "Administration > Providers" everything seems supported, from XEN / KVM
and even VMware (we have used all three with some success).
However oVirt is the obvious missing option...
I still have to try "Export as
ateFailure.: java.lang.IllegalArgumentExcepti
on: VM64BitMaxMemorySizeInMB has no value for version: 4.6
Guillaume Pavese
On Wed, Jun 16, 2021 at 4:54 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> OVA imports, template o
Hello,
I had exactly the same problem.
I started the grafana service before setting it up with engine-setup on a
Hosted Engine, and was unable to get to the login page.
I followed the proposed workaround: stopped grafana, removed
/var/lib/grafana/grafana.db, and ran engine-setup
--reconfigure
at
org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_292]
Guillaume Pavese
On Wed, Jun 16, 2021 at 7:40 PM Guillaume Pavese <
guillaume.pav...@interactiv-gr
, I had to manually rediscover
the targets
After all that, I could finally do a successful Host Reinstall from oVirt.
Guillaume Pavese
On Tue, Jun 8, 2021 at 2:24 PM Yedidyah Bar David wrote:
> On Tue, Jun 8, 2021 at 8:01 AM Guillaume Pavese <
>
ation fails?
Guillaume Pavese
On Fri, Jun 4, 2021 at 8:44 PM Lev Veyde wrote:
> Hi Guillaume,
>
> Have you moved the host to the maintenance before the upgrade (making sure
> that Gluster related options are unchecked)?
>
> Or
31 ps-inf-prd-kvm-fr-510.hostics.fr vdsm[54100]: WARN Worker
blocked:
File:
"/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
Retrying to manually stop vdsmd a second time then seems to wo
in
your vm definition file:
In /etc/xen/vmname, put the missing line in the following format:
uuid = "7ecf52a5-3657-fc7a-b2e2-66edd4cf1b6c"
The vm should now be visible to libvirt when doing 'virsh list --all'.
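If you need to generate a fresh UUID for that line, any RFC 4122 UUID will do; for example with Python's standard library:

```python
import uuid

# Generate a random UUID suitable for the xen config's uuid = "..." line
new_id = uuid.uuid4()
print(f'uuid = "{new_id}"')
```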
Guillaume Pavese
nstall a non-xen kernel in the vm,
- and have disks listed by UUID in /etc/fstab (this last one may not be
necessary)
Good luck,
Guillaume Pavese
On Wed, Apr 28, 2021 at 7:38 PM Strahil Nikolov via Users
wrote:
> I think that you have to expor
cking bug for a long
time" & "not enough perf anyway".
So,
since blocking bugs have at last been resolved,
and since different users report seeing strong performance gains contrary
to what has been tested by Red Hat,
it seems justified to reevaluate the situation.
Best,
Guillaume
Guillaume Pavese
On Thu, Feb 11, 2021 at 4:44 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> I strongly invite you to post those results in Red Hat's Bugzilla entries
>
> Guillaume Pavese
I strongly invite you to post those results in Red Hat's Bugzilla entries
Guillaume Pavese
On Wed, Feb 10, 2021 at 6:55 PM wrote:
> Hey everyone,
>
> Couple of months ago i benchmarked FUSE, libgfapi performance. If read
> spee
benchmark results should post
them in:
https://bugzilla.redhat.com/show_bug.cgi?id=1484227
https://bugzilla.redhat.com/show_bug.cgi?id=1465810
Guillaume Pavese
On Thu, Dec 19, 2019 at 11:44 AM Jayme wrote:
> It would be nice to see some progress
de needed info."
Is my understanding correct that only Red Hat employees can reopen closed
bugs? I have encountered quite a lot of situations where I'm facing an
issue covered by such a closed bug with a "please reopen if you can provide
info" msg, but frustratingly not being a
om/show_bug.cgi?id=1633642 : "Closing this as no
action taken from long back. Please reopen if required."
Would be nice if someone could reopen the closed bugs so this feature
doesn't get forgotten
Guillaume Pavese
On Tue, Feb 11, 2020 at 9:58
Mode 4 requires support and configuration on your switches.
Mode 1 should be the most compatible one; I think Mode 2 supports tagged
VLANs too, but I had some problems when I tried it.
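For reference, a minimal initscripts-style bond configuration is sketched below; the device name and option values are illustrative (mode 1 = active-backup, mode 4 = 802.3ad/LACP, which needs switch support as noted above):

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative sketch)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
ONBOOT=yes
BOOTPROTO=none
```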
Guillaume Pavese
On Thu, Feb 6, 2020 at 5:45 AM wrote:
> I hav
Yes, that's indeed the case for teaming. I think NetworkManager isn't
supported either.
I use bonding with VLANs set by initscripts.
Guillaume Pavese
On Thu, Feb 6, 2020 at 2:48 AM wrote:
> Thank you Guillaume! My mistake. Resolved.
according to
https://github.com/ansible/ansible/pull/66859/commits/5e021952f5ef26b7ad8490152820e97673433f79
there is only 1 line to insert, not 3: if
self.param('external_provider'):
Guillaume Pavese
On Wed, Feb 5, 2020 at 7:12 AM wrote
or is not available: {'method':
u'GlusterService.action'}
Failed to autorecover Host ps-inf-prd-kvm-fr-101.hostics.fr.
Could not find gluster uuid of server ps-inf-prd-kvm-fr-101.hostics.fr on
Cluster CLUSTER_FR1.
Guillaume Pavese
On Wed,
is empty.
Is that normal? Any idea if there is anything to do to see the gluster
volumes there and manage their options?
Best,
Guillaume Pavese
That was it!
Thanks for your help
Best regards,
Guillaume Pavese
On Wed, Jan 29, 2020 at 7:42 PM Martin Necas wrote:
> Hi,
>
> this issue was already submitted and I created the patch and already done
> backport for it.
> You can
To answer your question,
I used cockpit hosted engine deployment and "hosted-engine --deploy" with
default settings
Best,
Guillaume Pavese
On Wed, Jan 29, 2020 at 7:36 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrot
[],
"stdout": "192.168.222.15",
"stdout_lines": [
"192.168.222.15"
]
}"
2020-01-29 09:02:51,606+0100 INFO ansible ok {'status': 'OK',
'ansible_type': 'task', 'ansible_task': u'Get lo
t;: "Default",
"description": null,
"external_provider": null,
"fetch_nested": false,
"id": null,
"label": null,
"mtu": null,
"name": "ovir
09:12:35,972+0100 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 Exception: Entity 'None' was not found.
2020-01-29 09:12:36,073+0100 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 fatal: [localhost]: FAILED! =&
nged": false, "msg": "Entity
'None' was not found."}
Any idea?
Guillaume Pavese
Hi,
I use something similar.
However, I think that the correct scheduler in the case of virtio-scsi
devices (sd*) should be "noop" instead of "none".
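On el7-era hosts (legacy block layer), that choice could be made persistent with a udev rule instead of per-boot echoes; a sketch, with an arbitrary rule file name:

```
# /etc/udev/rules.d/60-io-scheduler.rules (sketch)
# Set "noop" for sd* disks exposed by virtio-scsi
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
```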
Best,
Guillaume Pavese
On Tue, Dec 31, 2019 at 9:27 PM Strahil wrote:
> You ca
libgfapi working in a replica 3 cluster.
See : https://bugzilla.redhat.com/show_bug.cgi?id=1465810
I wish someone would reopen those closed bugs so that this issue is not
forgotten.
Guillaume Pavese
On Tue, Dec 17, 2019 at 7:21 AM Jayme wrote:
>
Could it be a permissions problem, i.e. your awx user cannot access
/opt/my-envs?
You could try to create the ovirt virtualenv in the default path:
/var/lib/awx/venv/
Guillaume Pavese
On Wed, Dec 4, 2019 at 5:32 PM Gianluca Cecchi
wrote:
> On
My problem was that I did not fully follow your directions, and had not
pip-installed ovirt-engine-sdk-python in the virtualenv.
Inventory is syncing now in awx.
Thanks for the help!
Guillaume Pavese
On Fri, Nov 29, 2019 at 8:07 PM Nathanaël
wx/venv/ovirt-p2/lib/python2.7/site-packages/ansible/plugins/inventory/script.py",
line 161, in parse
raise AnsibleParserError(to_native(e))
[WARNING]: Unable to parse /opt/rh/rh-python36/root/usr/lib/python3.6/site-
packages/awx/plugins/inventory/ovirt4.py as an inventory source
Guil
I think you can do a migration from one ovirt env to another with ManageIQ
Guillaume Pavese
On Thu, Nov 14, 2019 at 2:03 PM Alex K wrote:
>
>
> On Thu, Nov 14, 2019, 01:25 wrote:
>
>> Thank Jayme,
>> It worked, wond
so.
Best
Guillaume Pavese
On Tue, Oct 1, 2019 at 6:07 PM Strahil wrote:
> You can go with 512 emulation and later you can recreate the brick without
> that emulation (if there are benefits of doing so).
> After all, your gluster is either rep
,
Guillaume Pavese
On Fri, Sep 27, 2019 at 3:19 PM Sandro Bonazzola
wrote:
>
>
> On Fri, Sep 27, 2019 at 5:21 AM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> I see that oVirt 4.3.6 fi
tion?
- Should we expect a performance increase by using the native 4k block size
of VDO?
Thanks
Guillaume Pavese
On Fri, Sep 27, 2019 at 12:00 AM Sandro Bonazzola
wrote:
> The oVirt Project is pleased to announce the general availability of oVirt
Yes, I had those same messages on 4.3.5 upgraded from 4.3.4
Guillaume Pavese
On Thu, Sep 19, 2019 at 5:22 PM Strahil wrote:
> Yeah, I've noticed I was searching for the wrong library (patching from
> phone seems to be not such a
"yum provides */libibverbs.so" will tell you that this file is provided
by rdma-core-devel
So to fix this messages when you login, you can do : "yum install
rdma-core-devel"
Guillaume Pavese
On Thu, Sep 19, 2019 at 1:56 AM Stra
File:
"/usr/lib64/python2.7/site-packages/subprocess32.py", line 1706, in
_communicate
orig_timeout)
File:
"/usr/lib64/python2.7/site-packages/subprocess32.py", line 1779, in
_communi
H6amBJTcjHQdipFTMmukXlzV-_7mavFF0XazAoSIR3-6bTa8AmDTG5NNVFiNPNw',
'ca_file': None} host=vs-inf-int-kvm-fr-304-210.hostics.fr
nested_attributes=[]
mars 13 09:18:23 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[4898]:
Connection2:0 to [target:
iqn.2000-01.com.synology:SVC-STO-FR-301.Targ
Yes:
"Package iscsi-initiator-utils-6.2.0.874-10.el7.x86_64 already installed
and latest version"
Guillaume Pavese
On Tue, Mar 12, 2019 at 11:54 PM Strahil Nikolov
wrote:
> Do you have the iscsi-initiator-utils rpm installed ?
>
TART,
GetDeviceListVDSCommand(HostName = ps-inf-int-kvm-fr-305-210.hostics.fr,
GetDeviceListVDSCommandParameters:{hostId='6958c4f7-3716-40e4-859a-bfce2f6dbdba',
storageType='ISCSI', checkStatus='false', lunIds='null'}), log id: 539fb345
2019-03-12 14:33:36,995+01
's what virt-manager is
doing seamlessly when no graphic hw is configured for the vm).
Guillaume Pavese
On Tue, Mar 5, 2019 at 11:39 PM Victor Toso wrote:
> Hi Jean,
>
> On Tue, Mar 05, 2019 at 02:20:59PM -, jeanbapti...@nfrance.com wro
84] E [MSGID: 101191]
[event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
(END)
Guillaume Pavese
On Mon, Mar 4, 2019 at 3:56 AM Endre Karlson
wrote:
> I have tried bumping to 5.4 now and still getting a lot of &quo
baseurl=
https://cbs.centos.org/repos/storage7-gluster-6-testing/os/$basearch/
enabled=1
#metadata_expire=60m
gpgcheck=0
GLHF
Guillaume Pavese
On Sun, Mar 3, 2019 at 6:16 AM Endre Karlson
wrote:
> Hi, should we downgrade / reinstall our cluster?
Not sure which is better as of now.
Guillaume Pavese
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy
Thank you very much for your answer. I am copying ovirt-devel too as that
could be of interest to them.
Guillaume Pavese
On Sat, Mar 2, 2019 at 7:00 AM Michael Sclafani wrote:
> Hi!
>
> 512 emulation was intended to support drivers that
andard one?
Thanks,
Guillaume Pavese
your help.
Guillaume Pavese
On Fri, Mar 1, 2019 at 5:15 PM Simone Tiraboschi
wrote:
>
>
> On Fri, Mar 1, 2019 at 6:49 AM Sahina Bose wrote:
>
>> On Wed, Feb 27, 2019 at 4:06 PM Guillaume Pavese
>> wrote:
>> >
>>
t-ovt-fr-301-210.hostics.fr
traceroute to vs-inf-int-ovt-fr-301-210.hostics.fr (192.168.122.147), 30
hops max, 60 byte packets
1 vs-inf-int-kvm-fr-301-210.hostics.fr (192.168.122.1) 3006.344 ms !H
3006.290 ms !H 3006.275 ms !H
Guillaume Pavese
following so libvirt would be happy again:
rm -rf /etc/libvirt/storage/*.xml
rm -rf /etc/libvirt/storage/autostart/*
rm -rf /var/tmp/local*
ovirt-hosted-engine-cleanup is not doing a really good job
Guillaume Pavese
On Tue, Feb 26, 2019 at 3:49
id [bit 10]
2019-02-25T17:50:08.919217Z qemu-kvm: warning: host doesn't support
requested feature: CPUID.07H:EBX.invpcid [bit 10]
I guess there is something about those last warnings?
It should be noted that I previously successfully deployed oVirt 4.2 in
the same Nested environment
Run
---
[root@vs-inf-int-kvm-fr-301-210 ~]#
I did not see any relevant log on the HE vm. Is there something I should
look for there?
Guillaume Pavese
On Tue, Feb 26, 2019 at 3:12 AM Simone Tiraboschi
wrote:
>
>
> On Mo
st that the playbook tries still fails with an empty result:
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
Expiry Time MAC addressProtocol IP address
HostnameClient ID or DUID
------
ress
HostnameClient ID or DUID
---
[root@vs-inf-int-kvm-fr-301-210 ~]#
Guillaume Pavese
On Tue, Feb 26, 2019 at 12:33 AM Simone Tiraboschi
wrote:
> OK, try this:
> temporary
> edit
> /usr/share/ansible/roles/ovirt.hosted_engine_s
el 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for
vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port
37002 to 127.0.0.1 port 5900, nchannels 4
Guillaume Pavese
Something was definitely wrong; as indicated, the qemu process
for guest=HostedEngineLocal was running but the disk file did not exist
anymore...
No surprise I could not connect.
I am retrying.
Guillaume Pavese
On Mon, Feb 25, 2019 at 11:15 PM Guillaume
journalctl:
févr. 25 14:55:38 vs-inf-int-kvm-fr-301-210.hostics.fr sshd[19595]: error:
connect_to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900: failed.
Guillaume Pavese
On Mon, Feb 25, 2019 at 10:44 PM Simone Tiraboschi
wrote:
>
>
>
>
hrough...*]
^C
To make sure:
[gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr
*9090*
Trying 10.199.210.11...
*Connected* to vs-inf-int-kvm-fr-301-210.hostics.fr.
Escape character is '^]'.
Guillaume Pavese
On Mo
inf-int-kvm-fr-301-210 ~]# cat
/etc/libvirt/qemu/networks/default.xml
default
ba7bbfc8-28b8-459e-a42d-c2d6218e2cb6
You have new mail in /var/spool/mail/root
[root@vs-inf-int-kvm-fr-301-210 ~]
Guillaume Pavese
On
*good* idea? Is that even possible / something that people
do?
Guillaume Pavese
On Sat, Feb 23, 2019 at 2:51 AM Jayme wrote:
> Personally I feel like raid on top of GlusterFS is too wasteful. It would
> give you a few advantages such as being a
opi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
{"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default
| grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'
As indicated on Trello,
HE deployment through cockpit is stuck at the beginning with "Please correct
errors before moving to next step", but no error is explicitly shown or
highlighted.
Guillaume Pavese
seems that LV cache is its own source of bugs and problems anyway, so we
are thinking of going for full NVMe drives when buying the production cluster.
What would the recommendation be in that case, JBOD or RAID?
Thanks
Guillaume Pavese
I am on the Trello board, but I cannot create cards.
Guillaume Pavese
On Wed, Feb 20, 2019 at 10:05 PM Sandro Bonazzola
wrote:
>
>
> On Tue, Feb 12, 2019 at 10:49 AM Sandro Bonazzola <
> sbona...@redhat.com> wrote:
asap without VDO.
Should we instead force the creation of the VDO volume on a thin LV by
tweaking the gdeploy configuration and applying the mentioned workaround?
Thanks
Guillaume Pavese
I managed to log back in by updating to ovirt-engine-wildfly-15.0.1-1.
But now, I have the original vm disk and its clone in Locked state.
There are no tasks listed in the UI, nor in vdsm-client:
On the SPM Host:
vdsm-client Host getAllTasksInfo
{}
If I try to
Additional info
On hosted-engine, engine-setup is stuck with:
[ INFO ] Cleaning async tasks and compensations
The following system tasks have been found running in the system:
Task ID: cae248f7-781e-4861-bfbe-89d6e654a996
Task Name: Unknown
Hello,
Fresh 4.3 cluster:
I imported a 4.2 export domain and copied the vm onto the newly provisioned
gluster volume with VDO.
I then tried to create a clone of a 50GB thin-provisioned vm and disconnected.
This morning I cannot log in to the Engine Manager even after rebooting
hosted-engine: I get this me
Ok, thanks for your input :)
Hi,
If possible, I would like to configure a hyperconverged cluster with the
hosts' system partitions and gluster's lvcache on partitions of SSDs in RAID1.
Is it possible to define the lvcache on a partitioned device like /dev/sdaX,
or is it only supported to pass a whole device like /dev/sdb?
Hi,
I am evaluating vdo for a hyperconverged oVirt cluster.
The volume is on a RAID device backed by an LSI (Dell PERC H710) controller
with Write Back cache enabled:
Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK
However these LSI controllers (all PERC series on