oVirt 4.4 requires EL8.2, so no, you cannot go to 4.4 without upgrading the OS
to EL8.
Yet, you can still bump the version to 4.3.10, which is still EL7 based and
works quite well.
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020, 17:39:52 GMT+3,
wrote:
Hi
terfs 2.4T 535G 1.9T
23% /rhev/data-center/mnt/glusterSD/gluster1:_data
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020, 19:44:54 GMT+3, Jeremey Wise
wrote:
Yes.
And at one time it was fine. I did a graceful shutdown, and after booting it
always seems to now
r-ovn.cer
Happy Hunting!
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020, 21:52:10 GMT+3, Philip Brown
wrote:
More detail on the problem.
after starting remote-viewer --debug, I get
(remote-viewer.exe:18308): virt-viewer-DEBUG: 11:45:30.594: New spice channel
000
DC_WD15EADS-00P8B0_WD-WMAVU0115133'
Of course, if you are planning to use only gluster, it could be far easier to set:
[root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
devnode "*"
}
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020
Most probably there is an option to tell it (I mean oVirt) the exact keys to be
used.
Yet, give the engine a gentle push and reboot it - just to be sure you are not
chasing a ghost.
I'm using self-signed certs and I can't help much in this case.
Best Regards,
Strahil Nikolov
In my setup, I got no filter at all (yet, I'm on 4.3.10):
[root@ovirt ~]# lvmconfig | grep -i filter
[root@ovirt ~]#
P.S.: Don't forget to run 'dracut -f', because the initramfs keeps a local
copy of lvm.conf.
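For reference, the check-and-rebuild sequence could look roughly like this (a sketch for EL7/EL8 hosts; paths may differ on your build):

```shell
# Show any LVM filter currently configured (no output = no filter)
lvmconfig | grep -i filter

# After changing /etc/lvm/lvm.conf, rebuild the initramfs so its
# embedded copy of lvm.conf matches the one on disk
dracut -f

# Optionally verify the new image carries the updated file
lsinitrd /boot/initramfs-$(uname -r).img -f etc/lvm/lvm.conf | grep -i filter
```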
Best Regards,
Strahil Nikolov
On Tuesday, September 22
> CLI shows. Can someone show me where to get logs? The GUI log when I
> try to "activate" thor server: "Status of host thor was set to
> NonOperational." "Gluster command [] failed on server ."
> i
ata via 'mkfs.xfs
-i size=512 /dev/block/device'.
Once all volumes are again a replica 3, just wait for the healing to finish
and then you can proceed with the oVirt part.
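To watch that healing finish, a sketch like this can help (the volume name 'data' is a placeholder; 'heal info summary' needs a reasonably recent Gluster):

```shell
# Trigger a full heal on the volume
gluster volume heal data full

# Poll until every brick reports 0 entries left to heal
watch -n 10 'gluster volume heal data info summary'
```

Only continue with the oVirt side once all bricks show zero pending entries.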
Best Regards,
Strahil Nikolov
On Wednesday, September 23, 2020, 20:45:30 GMT+3, Vincent Royer
wrote
As far as I know there is an automation to do it for you.
Best Regards,
Strahil Nikolov
On Wednesday, September 23, 2020, 21:41:13 GMT+3, Vincent Royer
wrote:
well that sounds like a risky nightmare. I appreciate your help.
Vincent Royer
778-825-1057
SUSTAINABLE MOBILE ENERGY
I guess 'yum reinstall vdsm-gluster'.
Best Regards,
Strahil Nikolov
On Wednesday, September 23, 2020, 22:07:58 GMT+3, Jeremey Wise
wrote:
Trying to repair / clean up HCI deployment so it is HA and ready for
"production".
I have gluster now showing thr
Once a host is in oVirt, you should not change the network... or that's what
I have been told.
You should remove the host from oVirt, do your configuration and then add the
host back.
Best Regards,
Strahil Nikolov
On Thursday, September 24, 2020, 01:43:40 GMT+3, wodel y
ed engine and issue a
'reboot' and the ovirt-ha-agent on one of the hosts will bring it up, or use
the 'hosted-engine' utility to shutdown and power up the VM.
About the engine not detecting a node up - check if the vdsm.service is running
on the node.
Best Regards,
Stra
Have you checked the oVirt 2020 conference videos?
There was a slot exactly on this topic - I think ansible was used for automatic
upgrade.
I prefer the manual approach, as I have full control over the environment.
Best Regards,
Strahil Nikolov
On Friday, September 25, 2020, 02:23:49
hout password.
The engine is not running on the host; it is running in a VM called HostedEngine,
and that VM has to be able to reach the host over ssh.
Did you do any ssh hardening?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To
(or whatever libvirt puts it).
> 3) I know that you can backup the engine. If I had been a smart person, how
> does one backup and recover from this kind of situation. Does anyone have
> any guides or good articles on this?
https://www.o
gluster volume heal full
10. Repeat again and remember to never wipe 2 nodes at a time :)
Good luck and take a look at Quick Start Guide - Gluster Docs
Best Regards,
Strahil Nikolov
Since oVirt 4.4, the stage that deploys the oVirt node/host adds an lvm
filter in /etc/lvm/lvm.conf, which is the reason behind that.
Best Regards,
Strahil Nikolov
On Friday, September 25, 2020, 20:52:13 GMT+3, Staniforth, Paul
wrote:
Thanks,
the gluster
Importing is done from the UI (Admin portal) -> Storage -> Domains -> newly added
domain -> "Import VM" -> select the VM and you can import.
Keep in mind that it is easier to import if all VM disks are on the same
storage domain (I've opened an RFE for multi-domain im
Hi Jeremey,
I am not sure that I completely understand the problem.
Can you provide the Host details page from UI and the output of:
'gluster pool list' & 'gluster peer status' from all nodes ?
Best Regards,
Strahil Nikolov
On Saturday, September 26, 2020, 20:31:23
Regards,
Strahil Nikolov
On Saturday, September 26, 2020, 21:44:28 GMT+3, matthew.st...@fujitsu.com
wrote:
I have created a three host oVirt cluster using 4.4.2.
I created an ISO storage domain to hold my collection of ISO images, and then
decided to migrate it to a
- Memory Overcommitment Manager Daemon
Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor preset:
disabled)
Active: inactive (dead)
What is the status of mom-vdsm.service ?
Best Regards,
Strahil Nikolov
On Sunday, September 27, 2020, 10:06:39 GMT+3, duhongyu
wrote
You cannot have 2 IPs for 2 different FQDNs of the same host.
You have to use something like:
172.16.100.101 thor.penguinpages.local thor thorst
Fix your /etc/hosts, or use DNS.
Best Regards,
Strahil Nikolov
On Monday, September 28, 2020, 03:41:17 GMT+3, Jeremey Wise
wrote:
when
having a
'replica 3 arbiter 1' volume.
Best Regards,
Strahil Nikolov
On Monday, September 28, 2020, 20:46:16 GMT+3, Jayme
wrote:
It might be possible to do something similar as described in the documentation
here:
https://access.redhat.com/documenta
too slow.
My VMs have 4 disks in a raid0 (boot) and striped LV (for "/").
Best Regards,
Strahil Nikolov
gards,
Strahil Nikolov
On Tuesday, September 29, 2020, 16:36:10 GMT+3, C Williams
wrote:
Hello,
We have decided to get a 6th server for the install. I hope to set up a 2x3
distributed replica 3.
So we are not going to worry about the "5 server" situation.
Thank You Al
I got the same behaviour with the Adblock Plus add-on.
Try in incognito mode (or with disabled plugins / a fresh browser).
Best Regards,
Strahil Nikolov
On Tuesday, September 29, 2020, 18:50:05 GMT+3, Philip Brown
wrote:
I have an odd situation:
When I go to
https://ovengine
Are you trying to use the same storage domain?
I hope not, as this is not supposed to be done like that. As far as I remember,
you need fresh storage.
Best Regards,
Strahil Nikolov
On Tuesday, September 29, 2020, 20:07:51 GMT+3, Sergey Kulikov
wrote:
Hello, I'm tryi
You can use this ansible module and assign your scheduling policy:
https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_cluster_module.html
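A hedged sketch of such a play (engine URL, credentials, and names are placeholders; the options follow the linked ovirt_cluster documentation):

```yaml
- name: Assign a scheduling policy to a cluster
  hosts: localhost
  tasks:
    - name: Obtain an SSO token for the engine API
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Apply the scheduling policy to the cluster
      ovirt_cluster:
        auth: "{{ ovirt_auth }}"
        name: Default
        data_center: Default
        scheduling_policy: evenly_distributed
```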
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 11:36:01 GMT+3, Kushagra Agarwal
wrote:
I was hoping if i
In your case it seems reasonable, but you should test the 2 stripe sizes (128K
vs 256K) before running in production. The good thing about replica volumes is
that you can remove a brick, recreate it from the cli and then add it back.
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020
ant
3. Test the playbook
4. Create a oneshot systemd service that starts after 'ovirt-engine.service' and
runs your playbook
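Step 4 could look roughly like this unit (file name, path, and playbook are illustrative):

```ini
# /etc/systemd/system/start-vms.service
[Unit]
Description=Start VMs via ansible once the engine is up
After=ovirt-engine.service
Wants=ovirt-engine.service

[Service]
Type=oneshot
# Generous timeout - the playbook may have to wait for the engine API
TimeoutStartSec=600
ExecStart=/usr/bin/ansible-playbook /root/start-vms.yml

[Install]
WantedBy=multi-user.target
```

Enable it with 'systemctl enable start-vms.service' on the host that runs the playbook.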
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 18:27:13 GMT+3, Jeremey Wise
wrote:
When I have to shut down the cluster... UPS runs out
If you can do it from the cli - use the cli, as it gives far more control than
the UI can provide.
Usually I use the UI for monitoring and basic stuff like starting/stopping a
brick or setting the 'virt' group via 'Optimize for Virt' (or whatever it
was called).
Best Rega
Also consider setting a reasonable 'TimeoutStartSec=' in your systemd service
file when you create the service...
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 20:18:01 GMT+3, Strahil Nikolov via Users
wrote:
I would create an ansible playbook th
- name: Power on {{ outer_item }} after snapshot restore
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    state: running
    name: "{{ item }}"
  loop:
    - VM1
    - VM2
Yeah, you have to fix the indentation (both Ansible and Python are a pain in
the *** )
Best
As I mentioned, I would use systemd service to start the ansible play (or a
script running it).
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 22:15:17 GMT+3, Jeremey Wise
wrote:
I would like to eventually go the ansible route... and was starting down that
path
s :)
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 22:55:40 GMT+3, Jeremey Wise
wrote:
As the three servers are CentOS 8 minimal installs + oVirt HCI wizard to keep
them lean and mean... a couple questions
1) which version of python would I need for this (note in sc
ation.
Best Regards,
Strahil Nikolov
On Thursday, October 1, 2020, 07:36:24 GMT+3, Jeremey Wise
wrote:
I have for many years used gluster because..well. 3 nodes.. and so long as I
can pull a drive out.. I can get my data.. and with three copies.. I have much
higher chance of getti
meters_managing-monitoring-and-updating-the-kernel
https://access.redhat.com/solutions/3710121
Best Regards,
Strahil Nikolov
On Thursday, October 1, 2020, 16:12:52 GMT+3, Mike Lindsay
wrote:
Hey Folks,
I've got a bit of a strange one here. I downloaded and inst
Based on
'https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_cluster_module.html'
there are the options 'scheduling_policy' & 'scheduling_policy_properties'.
Maybe they were recently introduced.
Best Regards,
Strahil Nikolov
On Thursday, October 1
What kind of setting do you want to change?
Maybe I misunderstood you. The 'scheduling_policy' requires a predefined
scheduling policy and 'scheduling_policy_properties' allows you to override the
score of a setting (like 'Memory').
Best Regards,
Strahi
Verify that your host is really down (or at least rebooted) and then in the UI
you can 'Confirm: Host has been rebooted' from the dropdown.
This should mark all your VMs as dead.
Best Regards,
Strahil Nikolov
On Friday, October 2, 2020, 12:03:31 GMT+3, Vrgotic, Mark
Have you tried to set the host into maintenance and then "Enroll Certificates"
from the UI ?
Best Regards,
Strahil Nikolov
On Friday, October 2, 2020, 12:27:19 GMT+3, momokch--- via Users
wrote:
hello everyone,
my ovirt-engine and host certificates are expired,
Best Regards,
Strahil Nikolov
On Saturday, October 3, 2020, 16:50:24 GMT+3, Michael Jones
wrote:
to get these two hosts into a cluster would i need to castrate them down
to nehalem, or would i be able to botch the db for the 2nd host from
"EPYC-IBPB" to "Opteron_G5"
> And of course I want Gluster to switch between single node, replication and
> dispersion seamlessly and on the fly, as well as much better diagnostic tools.
Actually Gluster can switch from distributed to
replicated/distributed-replicated on the fly.
Best Regards,
Str
I would put it in the yum.conf and export it as "http_proxy" & "https_proxy"
system variables.
Best Regards,
Strahil Nikolov
On Tuesday, October 6, 2020, 12:39:22 GMT+3, Gianluca Cecchi
wrote:
Hello,
I'm testing upgrade from 4.3.10 to 4.4.2
Hello All,
can someone send me the full link (not the short one) as my proxy is blocking
it :)
Best Regards,
Strahil Nikolov
On Tuesday, October 6, 2020, 15:26:57 GMT+3, Sandro Bonazzola
wrote:
Just a kind reminder about the survey (https://forms.gle/bPvEAdRyUcyCbgEc7
Hi Michael,
I'm running 4.3.10 and I can confirm that Opteron_G5 was not removed.
What is reported by 'virsh -c
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf capabilities'
on both hosts ?
Best Regards,
Strahil Nikolov
On Wednesday, October 7, 2020, 00:
?
- Have you checked the gluster cluster's logs for anything meaningful?
Best Regards,
Strahil Nikolov
n the virt group (/var/lib/glusterd/groups/virt - or something like
that).
Best Regards,
Strahil Nikolov
On Thursday, October 8, 2020, 15:16:10 GMT+3, Jarosław Prokopowski
wrote:
Hi Guys,
I had a situation 2 times that due to unexpected power outage something went
wrong an
I have seen many "checks" that are "OK"...
Have you checked that backups are not going over the same network?
I would disable the power management (fencing), so I can find out what has
happened to the systems.
Best Regards,
Strahil Nikolov
On Thursday, October 8
Hi Simon,
I doubt the system needs tuning from a network perspective.
I guess you can run some 'screen'-s which are pinging another system and logging
everything to a file.
Best Regards,
Strahil Nikolov
On Friday, October 9, 2020, 01:05:22 GMT+3, Simon Scott
wrote:
Based on the logs you shared, it looks like a network issue - but it could
always be something else.
If you ever experience something like that situation, please share the logs
immediately and add the gluster mailing list - in order to get assistance with
the root cause.
Best Regards,
Strahil
I guess you tried to ssh to the HostedEngine and then ssh to the host , right ?
Best Regards,
Strahil Nikolov
On Saturday, October 10, 2020, 02:28:35 GMT+3, Gianluca Cecchi
wrote:
On Fri, Oct 9, 2020 at 7:12 PM Martin Perina wrote:
>
>
> Could you please share wi
ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/
They control the behaviour of the system when to start flushing memory to disk
and when to block any process until all memory is flushed.
Best Regards,
Strahil Nikolov
On Saturday, October 10, 2020, 18:18:55 GMT+
Hi Jiri,
I already opened a feature request
https://bugzilla.redhat.com/show_bug.cgi?id=1881457 that is about something
similar.
Can you check if your case is similar and update the request?
Best Regards,
Strahil Nikolov
On Saturday, October 10, 2020, 23:48:01 GMT+3, Jiří Sléžka
n the same network?
What about DNS resolution - do you have entries in /etc/hosts?
Best Regards,
Strahil Nikolov
On Sunday, October 11, 2020, 11:54:47 GMT+3, Simon Scott
wrote:
Thanks Strahil.
I have found between 1 & 4 Gluster peer rpc-clnt-ping timer expired message
isable_http_filter=1
Extra info can be found at: https://access.redhat.com/solutions/3093891
Best Regards,
Strahil Nikolov
On Sunday, October 11, 2020, 18:41:25 GMT+3, Jeremey Wise
wrote:
I have a pair of nodes which service DNS / NTP / FTP / AD /Kerberos / IPLB etc..
ns01, ns02
Th
M".
Also it's suitable to start a VM during Engine's downtime.
Best Regards,
Strahil Nikolov
On Monday, October 12, 2020, 13:36:31 GMT+3, Budur Nagaraju
wrote:
Hi
Is there a way to deploy vms on the ovirt node
I have seen a lot of users use anongid=36,anonuid=36,all_squash to force
the vdsm:kvm ownership on the system.
Best Regards,
Strahil Nikolov
On Monday, October 12, 2020, 21:40:42 GMT+3, Amit Bawer
wrote:
On Mon, Oct 12, 2020 at 9:33 PM Amit Bawer wrote:
>
>
direct' and Direct I/O, so you should not
lose any data.
Best Regards,
Strahil Nikolov
On Tuesday, October 13, 2020, 16:35:26 GMT+3, Jarosław Prokopowski
wrote:
Hi Nikolov,
Thanks for the very interesting answer :-)
I do not use any raid controller. I was hoping
only 'replica 3' volumes - just to keep that in
mind.
From my perspective, JBOD is suitable for NVMes/SSDs while spinning disks
should be in a raid of some type (maybe RAID10 for performance).
Best Regards,
Strahil Nikolov
On Wednesday, October 14, 2020, 06:34:17 GMT+3, C William
f building gluster volumes, as the UI's primary focus is oVirt (quite
natural, right).
Best Regards,
Strahil Nikolov
On Wednesday, October 14, 2020, 12:30:42 GMT+3, Jarosław Prokopowski
wrote:
Thanks. I will get rid of multipath.
I did not set performance.strict-o-direct spe
biter 1' and trade storage for live migration.
Best Regards,
Strahil Nikolov
On Wednesday, October 14, 2020, 12:42:44 GMT+3, Gilboa Davara
wrote:
Hello all,
I'm thinking about converting a couple of old dual Xeon V2
workstations into (yet another) oVirt setup.
However, the us
Hi,
I would start with
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/
.
It might have some issues as 4.4 is quite fresh and dynamic, but you just need
to ping the community for help over the e-mail.
Best Regards,
Strahil Nikolov
> "distributed-replicate"
Nope. As far as I know - only when you have 3 copies of the data ('replica 3'
only).
Best Regards,
Strahil Nikolov
On Wed, Oct 14, 2020 at 7:34 AM C Williams wrote:
> Thanks Strahil !
>
> More questions may follow.
>
> Th
What is the output of:
df -h /rhev/data-center/mnt/glusterSD/server_volume/
gluster volume status volume
gluster volume info volume
In the "df" you should see the new space or otherwise you won't be able to do
anything.
Best Regards,
Strahil Nikolov
On Thursday, October 15
I would go to the UI and identify the host with the 'SPM' flag.
Then you should check the vdsm logs on that host (/var/log/vdsm/).
Best Regards,
Strahil Nikolov
On Thursday, October 15, 2020, 20:19:57 GMT+3, supo...@logicworks.pt
wrote:
Hello,
When I Enable Glust
skip the mkfs.xfs part.
https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
Best Regards,
Strahil Nikolov
On Tuesday, October 20, 2020, 13:36:58 GMT+3, harryo...@gmail.com
wrote:
Hi,
When I want to use zfs for software raid on my oVirt nodes instead of a
har
I usually run the following (HostedEngine):
[root@engine ~]# su - postgres
-bash-4.2$ source /opt/rh/rh-postgresql10/enable
-bash-4.2$ psql engine
How did you try to access the Engine's DB ?
Best Regards,
Strahil Nikolov
On Tuesday, October 20, 2020, 17:00:37
Have you checked the ovirt_host_network ansible module?
It has a VLAN example and I guess you can loop over all the VLANs.
Best Regards,
Strahil Nikolov
On Wednesday, October 21, 2020, 11:12:53 GMT+3, kim.karga...@noroff.no
wrote:
Hi all,
We have Ovirt 4.3, with 11 hosts, and
Usually, oVirt uses the 'virt' group of settings.
What are your symptoms?
Best Regards,
Strahil Nikolov
On Wednesday, October 21, 2020, 16:44:50 GMT+3, supo...@logicworks.pt
wrote:
Hello,
Can anyone help me with how I can improve the performance of glusterfs to work
is quite important and missed.
Best Regards,
Strahil Nikolov
On Wednesday, October 21, 2020, 22:35:21 GMT+3, Alex McWhirter
wrote:
In my experience, the ovirt optimized defaults are fairly sane. I may change a
few things like enabling read ahead or increasing the shard size, but
Hi Didi,
thanks for the info - I learned it the hard way (trial & error) and so far it
has been working.
Do we have an entry about that topic in the documentation?
Best Regards,
Strahil Nikolov
On Thursday, October 22, 2020, 08:27:08 GMT+3, Yedidyah Bar David
wrote:
On
I might be wrong, but I think that the SAN LUN is used as a PV and then each
disk is an LV from the host's perspective.
Of course, I could be wrong and someone can correct me. All my oVirt
experience is based on HCI (Gluster + oVirt).
Best Regards,
Strahil Nikolov
On Thursday, October 22
that is separate
from test :)
Best Regards,
Strahil Nikolov
On Thursday, October 22, 2020, 14:00:52 GMT+3, supo...@logicworks.pt
wrote:
Hello,
For example, a Windows machine runs too slow; usually the disk is always at 100%
The group virt settings is this?:
performance.quick
Most probably, but I have no clue.
You can set the host into maintenance and then activate it, so the volume gets
mounted properly.
Best Regards,
Strahil Nikolov
On Friday, October 23, 2020, 03:16:42 GMT+3, Simon Scott
wrote:
Hi Strahil,
All networking configs
Can you try to set the destination host into maintenance and then 'Reinstall'
from the web UI drop-down?
Best Regards,
Strahil Nikolov
On Friday, October 23, 2020, 18:00:07 GMT+3, Anton Louw via Users
wrote:
Apologies, I should also add that the destination
Hm... interesting case.
Have you tried to set it into maintenance? Setting a domain to maintenance
forces oVirt to pick another domain as master.
Strahil Nikolov
On Friday, October 23, 2020, 19:34:19 GMT+3, supo...@logicworks.pt
wrote:
When data (Master) is
r.choose-local to 'yes'
- Any errors and warnings in the gluster logs ?
Best Regards,
Strahil Nikolov
On Thursday, October 22, 2020, 13:59:04 GMT+3,
wrote:
Hello,
For example, a Windows machine runs too slow; usually the disk is always at 100%
The group vir
calculate the max MTU.
Best Regards,
Strahil Nikolov
olume info
Best Regards,
Strahil Nikolov
On Thursday, October 15, 2020, 16:55:27 GMT+3,
wrote:
Hello,
I just added a second brick to the volume. Now I have 10% free, but still cannot
delete the disk. Still the same message:
VDSM command DeleteImageGroupVDS failed: Could not
I found this in the SPAM folder... maybe it's not relevant any more.
My guess is that you updated Chrome recently and they changed something :)
In my case (openSUSE Leap 15), it was just an ad-blocker, but I guess your
Chrome version could be newer.
Best Regards,
Strahil Nikolov
Hello Gobinda,
I know that gluster can easily convert a distributed volume to a replica volume,
so why is it not possible to first convert to replica and then add the nodes as
HCI?
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020, 08:20:56 GMT+2, Gobinda Das
wrote
kills the node
ungracefully in order to safely start HA VMs on another node.
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020, 10:27:01 GMT+2, lifuqi...@sunyainfo.com
wrote:
Hi everyone:
Description of problem:
When exec "reboot" or "shutdown -h
start eating the last space you got left ... so be quick :)
P.S.2: I hope you know that the only supported volume types are
'distributed-replicated' and 'replicated' :)
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020, 11:02:10 GMT+2, supo...@logicwork
Are the VM's disks located on the distributed Volume ?
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020, 17:21:49 GMT+2, supo...@logicworks.pt
wrote:
The engine version is 4.3.9.4-1.el7
I have 2 simple glusterfs, a volume with one brick gfs1 with an old versi
Why don't you use the devices of the 2 bricks in a single striped LV or raid0
md device?
Distributed volumes spread the files among the bricks, and the performance is
limited to the brick device's speed.
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020, 18:09:47
Nope,
officially oVirt supports only replica 3 (replica 3 arbiter 1) or replica 1
(which actually is a single-brick distributed) volumes.
If you have issues related to the Gluster volume - like this case - the
community support will be "best effort".
Best Regards,
Strahil Nikol
DC.
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020, 19:20:31 GMT+2, supo...@logicworks.pt
wrote:
So, in oVirt if I want a single storage, without high availability, what is the
best solution?
Can I use gluster replica 1 for a single storage?
by the way, cluster.min-fr
or the command 'gluster volume set <VOLNAME> group virt' in order to
implement the optimal settings for virtualization. Of course, some tunings can
be made, like how many I/O threads to be used and some caching - but even the
defaults are OK.
Have you checked anything from the list I sent
/iSCSI
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020, 21:53:05 GMT+2, supo...@logicworks.pt
wrote:
No, it is not a replica gluster; it is just one brick, one volume, one
single-server storage.
This is what I get:
# gluster volume set data group virt
volume set: failed
You can change it via UI -> Hosts -> select new SPM host -> Management ->
Select as SPM
Best Regards,
Strahil Nikolov
On Wednesday, October 28, 2020, 19:46:14 GMT+2,
wrote:
I think I have a problem in a Nic of one host. This host is the SPM
That's probably w
an SSD and setting a higher percentage of inodes (option 'maxpct=' of mkfs.xfs).
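For illustration, a brick formatted with a larger inode percentage might look like this (device path is a placeholder; the default maxpct depends on filesystem size):

```shell
# 512-byte inodes (common for Gluster bricks) and up to 50% of the
# filesystem reserved for inodes instead of the size-dependent default
mkfs.xfs -f -i size=512,maxpct=50 /dev/vg_brick/lv_brick
```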
Best Regards,
Strahil Nikolov
On Wednesday, October 28, 2020, 00:33:22 GMT+2, marcel d'heureuse
wrote:
Hi Strahil,
where can I find some documents for the conversion to replica? works t
- the maintenance is
cancelled. Next it will set the host into maintenance and most probably (not
sure about this one) the engine will assign a new host as SPM.
Best Regards,
Strahil Nikolov
On Wednesday, October 28, 2020, 05:04:44 GMT+2, lifuqi...@sunyainfo.com
wrote:
Hi, St
Erm... no one?
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020, 02:51:00 GMT+2, Strahil Nikolov via Users
wrote:
Hello All,
I would like to learn more about OVN and especially the maximum MTU that I can
use in my environment.
Current Setup 4.3.10
Network was
in 4.3.10's UI it shows 1500 :)
On Friday, October 30, 2020, 13:25:05 GMT+2, Dominik Holler
wrote:
On Thu, Oct 29, 2020 at 9:36 PM Alex K wrote:
>
>
> On Tue, Oct 27, 2020, 02:49 Strahil Nikolov via Users wrote:
>> Hello All,
>>
>> I wo
Any hint for the location of "Automatic Synchronization" in UI ?
Best Regards,
Strahil Nikolov
On Friday, October 30, 2020, 20:16:13 GMT+2, Dominik Holler
wrote:
On Fri, Oct 30, 2020 at 7:03 PM Strahil Nikolov wrote:
> in 4.3.10's UI it shows 1500 :)
>
If you mean "Administration" -> "Providers" -> "Ovirt-provider-ovn" -> it is
enabled.
Best Regards,
Strahil Nikolov
On Friday, October 30, 2020, 21:10:02 GMT+2, Strahil Nikolov
wrote:
Any hint for the location of "Automatic Syn
The only one I know is RH318, but it is a paid one.
Best Regards,
Strahil Nikolov
On Saturday, October 31, 2020, 02:03:59 GMT+2, i...@worldhostess.com
wrote:
Can someone recommend a training video or some kind of step-by-step document to
do the installation and administration
Check if qemu-guest-agent(s) is available and use that instead.
Best Regards,
Strahil Nikolov
On Saturday, October 31, 2020, 22:04:46 GMT+2,
wrote:
What is the best way to install ovirt guest on Ubuntu 16.04.6?
What I did:
# apt-get install ovirt-guest-agent
I changed value
Where is that option ?
Best Regards,
Strahil Nikolov
On Sunday, November 1, 2020, 08:56:44 GMT+2, Joris DEDIEU
wrote:
Hi list,
I forgot to check "Discard after Delete" when creating a new volume. Is there a
way (other than to empty the volume) to reclaim free blocks.