Usually libvirt's logs provide hints (though not always clear clues) about any issues.
For example:
/var/log/libvirt/qemu/.log
Has anything changed recently (maybe the oVirt version was upgraded)?
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020, 23:28:13 GMT+3, Vinícius Ferrão wrote:
Just select the volume and press "start". It will automatically tick "force
start" and the volume will fix itself.
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020, 20:53:15 GMT+3, Jeremey Wise wrote:
oVirt engine shows one of the gluster servers
ervers with
disks on them), but I guess that is not an option, right?
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020, 15:58:01 GMT+3, supo...@logicworks.pt wrote:
Hello,
I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain st
For some OS versions, oVirt's reported values are accurate, but for other
versions they are not.
I think it is more accurate to say that oVirt improperly calculates memory
for SLES 15/openSUSE 15.
I would open a bug at bugzilla.redhat.com.
Best Regards,
Strahil Nikol
rage domains I would have
to import the VM the first time , just to delete it and import it again - so I
can get my VM disks from the storage...
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020, 11:47:04 GMT+3, Eyal Shenitzky wrote:
Hi Strahil,
Maybe those V
Have you tried to upload your qcow2 disks via the UI ?
Maybe you can create a blank VM (with the same disk sizes) and then replace the
disks with your qcow2 images from KVM (works only on file-based storage like
Gluster/NFS).
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020, 09:12
Why is your NVMe under multipath ? That doesn't make sense at all.
I have modified my multipath.conf to blacklist all local disks. Also, don't forget
the '# VDSM PRIVATE' line near the top of the file.
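For example, a minimal sketch of such a multipath.conf (the device patterns below are made up - match your own local disks):

```
# VDSM PRIVATE
# The line above tells VDSM not to overwrite this file on upgrades.
blacklist {
    # Hypothetical examples - adjust to your local NVMe/SATA devices
    devnode "^nvme[0-9]"
    devnode "^sd[a-b]$"
}
```

After editing, reload with 'multipathd reconfigure' (or restart multipathd).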
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020
What type of disks are you using ? Any chance you use thin disks ?
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020, 07:20:23 GMT+3, Vinícius Ferrão via
Users wrote:
Hi, sorry to bump the thread.
But I'm still having this issue on the VM. These crashes are still
Can you put 1 host in maintenance and use the "Installation" -> "Reinstall" and
enable the HE deployment from one of the tabs ?
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020, 06:38:06 GMT+3, ddqlo wrote:
so strange! After I set glob
That's quite strange.
Any errors/clues in the Engine's logs ?
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020, 05:58:35 GMT+3, ddqlo wrote:
so strange! After I set global maintenance, powered off and started HE, the cpu
of HE became 'Westmere'
source /opt/rh/rh-postgresql10/enable
psql engine
List the tables:
\dt
Then you have to search where the status of the VM is.
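As a starting point, a hedged sketch of such a query (the table/column names below assume the usual engine schema, where vm_static holds the names and vm_dynamic the runtime status - verify with \dt first):

```sql
-- Hypothetical query against the oVirt engine DB
SELECT s.vm_name, d.status
FROM vm_static s
JOIN vm_dynamic d ON s.vm_guid = d.vm_guid;
```

The status column is an integer code, so cross-check it against the engine sources before changing anything.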
Best Regards,
Strahil Nikolov
On Sunday, 20 September 2020, 11:59:53 GMT+3, Gilboa Davara wrote:
On Sat, Sep 19, 2020 at 5:07 PM Strahil Nikolov
In some OS versions it should not be considered a bug.
For RH, you can check https://access.redhat.com/solutions/406773 for more
details. (For access, you can use a free subscription from
developers.redhat.com.)
Best Regards,
Strahil Nikolov
On Thursday, 17 September 2020, 16:00
Are you really sure it is a NetworkManager problem ?
This sounds more like a switch MAC table gone wild.
I would recommend capturing 'nmcli con show' & 'ip route' via a
script while the situation is ongoing.
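A tiny sketch of such a capture script (the log path and naming are my own choice - adjust as needed):

```shell
#!/bin/sh
# Snapshot the current network state into a timestamped log so the
# outputs can be compared after the incident.
LOG=/tmp/netstate-$(date +%Y%m%d-%H%M%S).log
{
  echo "=== $(date) ==="
  command -v nmcli >/dev/null 2>&1 && nmcli con show
  command -v ip >/dev/null 2>&1 && ip route
} > "$LOG" 2>&1
echo "captured to $LOG"
```

Run it from cron (or a simple while/sleep loop) while the problem is ongoing.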
Best Regards,
Strahil Nikolov
On Friday, 18 Sep
You have rebooted the host where the VM was previously running, right ?
If oVirt doesn't detect that the host was rebooted, you can mark it as such:
UI -> Hosts -> select the Host -> the 3 dots -> "Confirm 'Host has been
rebooted'"
Best Regards,
Strahi
;) ?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/c
to migrate it and if it succeeds - then you
should make it permanent.
Best Regards,
Strahil Nikolov
On Friday, 18 September 2020, 04:40:39 GMT+3, ddqlo wrote:
HE:
HostedEngine
b4e805ff-556d-42bd-a6df-02f5902fd01c
http://ovirt.org/vm/tune/1.0";
xmlns:ovirt
:
..
Haswell-noTSX
..
On 2020-09-17 03:03:24, "Strahil Nikolov" wrote:
>Can you verify the HostedEngine's CPU ?
>
>1. ssh to the host hosting the HE
>2. alias virsh='virsh -c
>qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
>3. v
the hosts:
..
Westmere
..
others vms which can be migrated:
..
Haswell-noTSX
..
On 2020-09-17 03:03:24, "Strahil Nikolov" wrote:
>Can you verify the HostedEngine's CPU ?
>
>1. ssh to the host hosting the HE
>2. alias virsh='virsh -c
>qemu:///syste
efore that like:
- name: Debug server_cpu_dict
  debug:
    var: server_cpu_dict
Best Regards,
Strahil Nikolov
On Thursday, 17 September 2020, 00:30:57 GMT+3, Michael Blanton wrote:
In my previous reply:
>> Ansible task reports them as Xeon 5130.
>> According to Intel Ark th
You didn't mention your CPU type.
Best Regards,
Strahil Nikolov
On Wednesday, 16 September 2020, 20:44:23 GMT+3, Michael Blanton wrote:
Wondering if there are any suggestions here before I wipe these nodes
and go back to another Hypervisor.
On 9/14/2020 12:59 PM, Mi
Then you can power it up and it should be good to go.
Best Regards,
Strahil Nikolov
On Wednesday, 16 September 2020, 17:29:10 GMT+3, Arman Khalatyan wrote:
ok, will try on our env with passthrough. Could you please send how you
pass through the CPU? Simply via the oVirt GUI?
What is your VM's OS type ?
There are some differences per OS version ->
https://www.redhat.com/sysadmin/dissecting-free-command
Best Regards,
Strahil Nikolov
On Wednesday, 16 September 2020, 11:13:51 GMT+3, KISHOR K wrote:
Hi,
Memory field/column for few of VMs in o
show
the Hosts' .
Best Regards,
Strahil Nikolov
On Wednesday, 16 September 2020, 10:16:08 GMT+3, ddqlo wrote:
My gateway was not pingable. I have fixed this problem and now both nodes have
a score (3400).
Yet, the hosted engine could not be migrated. Same log in en
What happens if you create another VM and attach the disks to it ?
Does it boot properly ?
Best Regards,
Strahil Nikolov
On Wednesday, 16 September 2020, 02:19:26 GMT+3, Facundo Garat wrote:
Hi all,
I'm having some issues with one VM. The VM won't start and it
What about the VMware ESX host - does it have the same CPU ?
Best Regards,
Strahil Nikolov
On Wednesday, 16 September 2020, 01:58:30 GMT+3, Rav Ya wrote:
Hi Arman,
Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
The VM is configured for host CPU pass through and pinned to 6 CPUs
First validate that the issue happens in:
- incognito mode
- another browser
- another OS
- disabled plugins/stuff in browser
Do you have a proxy ?
Best Regards,
Strahil Nikolov
On Tuesday, 15 September 2020, 12:45:33 GMT+3, i...@worldhostess.com wrote:
It seems that the
Are the engine and the hypervisor in sync (ntp/chrony working and no drift)?
Best Regards,
Strahil Nikolov
On Tuesday, 15 September 2020, 11:46:44 GMT+3, momokch--- via Users wrote:
hello everyone,
I apologize for asking what is probably a very basic question.
when i login
Both nodes have a lower-than-usual score (it should be 3400).
Based on the score, you are probably suffering from the gateway-score-penalty [1][2].
Check if your gateway is pingable.
Best Regards,
Strahil Nikolov
1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
2 - /etc
As I mentioned in the Gluster's slack, start with providing the output of some
cli commands:
gluster pool list
gluster peer status
gluster volume list
gluster volume status
Best Regards,
Strahil Nikolov
On Monday, 14 September 2020, 16:24:04 GMT+3, tho...@hoberg.net wrote:
Why don't you use 'host-passthrough' cpu type ?
Best Regards,
Strahil Nikolov
On Sunday, 13 September 2020, 20:31:44 GMT+3, wodel youchi wrote:
Hi,
I've been using my core i5 6500 (skylake-client) for some time now to test
oVirt on my machine.
However t
I would prefer entries in /dev/disk/by-id.
Have you tried not specifying the "/dev/", like 'mapper/XXYYY' ?
Best Regards,
Strahil Nikolov
On Sunday, 13 September 2020, 08:56:30 GMT+3, Jeremey Wise wrote:
Deployment on three node cluster using oVirt
Most probably you have libvirtd.service running (if the output is from a VM).
Just disable it via 'systemctl disable --now libvirtd.service' and, to verify
that everything will be fine after reboot, reboot the VM.
Best Regards,
Strahil Nikolov
On Friday, 11 September 2020
"Failed to resolve" is pretty obvious:
- typo in the NFS host name
- DNS issue
You can test by adding an entry for the NFS host in /etc/hosts.
Best Regards,
Strahil Nikolov
On Thursday, 10 September 2020, 19:34:33 GMT+3, wrote:
MainProcess|jsonrpc/4::DEBUG::2020-09-10
18:14
Ssh to the host that failed to mount and then:
mount -t nfs :/ /mnt
To verify the mount:
findmnt /mnt
Usually it is recommended to set the NFS export with anonuid and anongid set to
'36' and also use 'all_squash'. This will prevent a lot of issues.
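An /etc/exports line following that advice could look like this (the export path and the open wildcard are placeholders - restrict the client list to your hosts):

```
# Hypothetical export; uid/gid 36 maps to vdsm:kvm, which oVirt expects
# to own the storage.
/exports/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
```

Apply it with 'exportfs -ra' after editing.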
Best Regards,
Strahil Nikolov
man 8 mount.glusterfs or online ->
https://www.systutorials.com/docs/linux/man/8-mount.glusterfs/
Best Regards,
Strahil Nikolov
On Tuesday, 8 September 2020, 22:34:16 GMT+3, Holger Petrick wrote:
Strahil,
Thanks a lot. Do you have documentation about these settings?
Tha
It would be nice.
Initially, when I was setting up my lab, I wanted to boot from USB and be able to
host VMs properly, but due to speed limitations it failed to work normally.
Maybe an in-memory mode, where booting from USB is possible, could be nice.
Best Regards,
Strahil Nikolov
On Tuesday, 8
"mount_options": null,
> "path": "/data/nfs",
>
"version": "auto"
Have you tried to mount it manually ?
Best Regards,
Strahil Nikolov
I think you can try to set one of the HE hosts into maintenance and then
use the UI to 'reinstall'. Don't forget to also mark the host as a HE host (a
dropdown in the UI wizard).
Best Regards,
Strahil Nikolov
On Tuesday, 8 September 2020, 10:24:00 GMT+3, Yed
You can extend the gluster volume with your extra 3 nodes.
Keep in mind that it is good to have
'backup-volfile-servers=node2:node3:node4:node5:node6' in order to allow FUSE
to overcome failure of node1.
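In oVirt that option goes into the storage domain's mount options; as a plain fstab sketch for a manual FUSE mount (host names and paths below are made up):

```
# Hypothetical fstab entry for the expanded 6-node volume
node1:/data  /mnt/data  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3:node4:node5:node6  0 0
```

With this, the FUSE client can fetch the volfile from any listed node if node1 is down at mount time.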
Best Regards,
Strahil Nikolov
On Monday, 7 September 2020, 20:48:55
You can use the following:
vim ~/.bashrc
alias virsh='virsh -c
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
source ~/.bashrc
#Show host capabilities
virsh capabilities
Now repeat on the other nodes. Compare the CPU from the 3 outputs.
Best Regards,
Strah
What is the output of 'hosted-engine --vm-status' on the node where the
HostedEngine is running ?
Best Regards,
Strahil Nikolov
On Monday, 7 September 2020, 03:53:13 GMT+3, ddqlo wrote:
I could not find any logs because the migration button is disabled in t
-no1.a
riadne-t.local-gluster_bricks-data-data.pid lock failed [Resource temporarily
unavailable]
That doesn't make sense.
Can you share the logs in a separate thread at gluster-us...@gluster.org ?
Best Regards,
Strahil Nikolov
On Friday, 4 September 2020, 19:55:24 GMT+3, s
Is this a HCI setup ?
If yes, check the gluster status (I prefer the cli, but the UI is also valid).
gluster pool list
gluster volume status
gluster volume heal info summary
Best Regards,
Strahil Nikolov
On Friday, 4 September 2020, 00:38:13 GMT+3, Gillingham, Eric J (US 393D)
via Users
What about the gluster logs ?
Best Regards,
Strahil Nikolov
On Thursday, 3 September 2020, 20:57:05 GMT+3, souvaliotima...@mail.com wrote:
Thank you very much for your reply.
I checked the NTP and realized the service wasn't working properly on two of
the three nodes
ils: https://access.redhat.com/solutions/4365931
Best Regards,
Strahil Nikolov
On Thursday, 3 September 2020, 13:42:19 GMT+3, Sverker Abrahamsson wrote:
Hi Ales,
this is a CentOS 8 system, so my impression was that you always have NetworkManager
then? At least my attempt to remove it failed
If you have snapshots like A -> B -> C and you restore A, it is normal to
lose B & C. After all, when you restore A, B & C never happened. Otherwise,
oVirt would have to clone the snapshots into separate images, and that is not
what was requested, right ?
Best Regards,
Strahil Nikolov
What is your switch brand & model ?
Maybe someone more experienced in networking can help.
Best Regards,
Strahil Nikolov
On Wednesday, 2 September 2020, 23:39:57 GMT+3, Sverker Abrahamsson via Users
wrote:
Well, unfortunately I don't have a choice since it is out of m
nually is error-prone, so you should consider
'https://github.com/gluster/gdeploy' unless you are an experienced Gluster user.
Best Regards,
Strahil Nikolov
On Wednesday, 2 September 2020, 21:55:42 GMT+3, Michael Thomas wrote:
Is there a CLI for setting up a hyperconverged
Switchports can either be tagged or untagged.
I'm not sure that your setup is supported at all.
Best Regards,
Strahil Nikolov
On Wednesday, 2 September 2020, 20:41:57 GMT+3, Sverker Abrahamsson via Users
wrote:
Pretty formatting the "desired state" it seems tha
Have you tried enabling the gluster repos from the CentOS Storage SIG ?
I think it was something like : yum install centos-release-gluster7
Best Regards,
Strahil Nikolov
On Wednesday, 2 September 2020, 06:05:03 GMT+3, Vinícius Ferrão via Users
wrote:
Hello,
Anyone had success
Have you checked elrepo ?
Best Regards,
Strahil Nikolov
On Tuesday, 1 September 2020, 08:27:10 GMT+3, Remulo wrote:
Hello
I have some blades with 10GbE interfaces that have/need the Emulex be2net
driver; however, it is no longer available on RHEL 8/CentOS 8. Is there any
Are you reusing a gluster volume or have you created a fresh one ?
Best Regards,
Strahil Nikolov
On Tuesday, 1 September 2020, 02:58:19 GMT+3, tho...@hoberg.net wrote:
I've just tried to verify what you said here.
As a base line I started with the 1nHCI Gluster setup.
libvirt logs on the destination.
Best Regards,
Strahil Nikolov
On Monday, 31 August 2020, 10:47:22 GMT+3, ddqlo wrote:
Thanks! The scores of all nodes are not '0'. I found that someone has already
asked a question like this. It seems that this feature has been disable
It's more focused towards the enterprise.
You can reconfigure this by ssh-ing to the HE and updating your postfix
configuration.
Best Regards,
Strahil Nikolov
On Saturday, 29 August 2020, 17:55:53 GMT+3, David White via Users wrote:
I finally got oVirt node installed
FS/anything network based
- VM migration network that will allow you to migrate the VM from one Host to
another
- ovirtmgmt which is used by the engine to reach to all Hosts and manage them
Another type of network could be when you use IPMI-based fencing which is done
from a Host to
Yes, it is.
You can still install and set up Gluster all by yourself (lots of manual steps)
and then use that as storage. Yet, replica 1 and replica 3 (or replica 3
arbiter 1) are the only ones supported in oVirt.
Best Regards,
Strahil Nikolov
On Thursday, 27 August 2020, 18:28:55 GMT
Have you checked the output of 'hosted-engine --vm-status' in a shell ?
Check the Score of the hosts. Maybe there is a node with a score of '0' ?
Best Regards,
Strahil Nikolov
On Tuesday, 25 August 2020, 13:46:18 GMT+3, 董青龙 wrote:
Hi all,
I h
Libgfapi bypasses the context switching from user space to kernel to user space
(FUSE), so it gets better performance.
I can't find the previous communication, so can you share your volume settings
again ?
Best Regards,
Strahil Nikolov
On Sunday, 23 August 2020, 21:45:22 GMT
llow oVirt recommendation, but if I
can avoid needless "shaking" of the platform, I would rather take that route.
-
kind regards/met vriendelijke groeten
Marko Vrgotic
ActiveVideo
On 20/08/2020, 13:39, "Strahil Nikolov" wrote:
Actually,
you don't need to find where the SHE is.
Once you set the host into maintenance, all VMs, including the SHE, will be
evacuated from it.
Best Regards,
Strahil Nikolov
On 20 August 2020 at 10:50:14 GMT+03:00, "Vrgotic, Marko"
wrote:
>Dear oVirt,
>
>For
Are you using fully allocated VM disks ?
On 20 August 2020 at 0:53:40 GMT+03:00, info--- via Users
wrote:
>Additional Info: I've running Nextcloud on a VM.
>- to sync (download) 1 file to the client with 300 MB is fast
>- to sync (download) 300 files to the client with in total 1 MB is very
>sl
On 19 August 2020 at 22:39:22 GMT+03:00, info--- via Users
wrote:
>Thank you for the quick reply.
>
>- I/O scheduler hosts -> changed
>echo noop > /sys/block/sdb/queue/scheduler
>echo noop > /sys/block/sdc/queue/scheduler
On reboot it will be reverted. Test this way and if you notice improvem
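To make the scheduler survive a reboot, one option is a udev rule (a sketch; the file name and the sd[bc] match below are illustrative - match your own data disks, and note that multiqueue kernels use 'none' instead of 'noop'):

```
# /etc/udev/rules.d/60-io-scheduler.rules (hypothetical path/name)
# Set the scheduler for the data disks at boot instead of echoing manually.
ACTION=="add|change", KERNEL=="sd[bc]", ATTR{queue/scheduler}="noop"
```

Reload with 'udevadm control --reload && udevadm trigger' to apply without rebooting.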
you are OK with the limitations
that it imposes -> you can switch to libgfapi instead of FUSE (requires a VM
power off and power on to switch)
Best Regards,
Strahil Nikolov
On 19 August 2020 at 21:26:44 GMT+03:00, info--- via Users
wrote:
>Hello,
>
>I'm running a home se
Usually the cockpit deployment generates an ansible play which you can edit
before running it.
Best Regards,
Strahil Nikolov
On 18 August 2020 at 20:57:36 GMT+03:00, "Raphael Höser"
wrote:
>I don't use ansible for the deployment of the hosted engine right now,
>but
d the IP of the local engine vm
- ssh to that engine
- add the necessary lines for dnf
Best Regards,
Strahil Nikolov
On 18 August 2020 at 10:28:46 GMT+03:00, "Raphael Höser"
wrote:
>Hi all,
>
>I'm currently installing OVirt 4.4 on CentOS 8 in an hosted engine
>set
te, vm consoles , and anything that comes
to your mind.
Without proper testing, or without the same packages for test and prod,
everything is useless.
P.S.: Developers are using CentOS Stream for development, so if you are
wondering - CentOS 8 vs Stream - most probably Stream will be bett
oVirt 4.4.X is using Gluster v7
Best Regards,
Strahil Nikolov
On 17 August 2020 at 19:15:54 GMT+03:00, supo...@logicworks.pt wrote:
>Hello,
>
>What is the compatibility gluster version to work with oVirt 4.4.1 ?
>
>Thanks
What is the output of 'hosted-engine --vm-status' on all nodes that are
supposed to host the HostedEngine ?
Usually issues there can be debugged by checking the ovirt-ha-broker and
ovirt-ha-agent logs (in /var/log on the affected host).
Best Regards,
Strahil Nikolov
On 15 August
remove the ISO domain. Have you tried setting
the domain to maintenance first and then removing it ?
Check your VMs to see if any has an ISO attached.
Also, you can upload an ISO to a data domain and use that to install VMs.
Best Regards,
Strahil Nikolov
On 13 August 2020 at 22:21:10
oVirt switched to qemu-guest-agent, which is also used in pure KVM environments.
Best Regards,
Strahil Nikolov
On Thursday, 13 August 2020, 15:50:14 GMT+3, carl langlois wrote:
Hi,
This may not be the right place to ask but any of you is using Ubuntu 20.04
guest. I have noticed
Hi Olaf,
yes but mark it as '[RFE]' in the name of the bug.
Best Regards,
Strahil Nikolov
On 12 August 2020 at 12:41:55 GMT+03:00, olaf.buitel...@gmail.com wrote:
>Hi Strahil,
>
>It's not really clear how i can pull requests to the oVirt repo.
>I've found this
- live migrate
If you have gluster ACL issues, those will fail; otherwise it's something
else.
I hit the bug when upgrading from 6.5 to 6.6, so if it is a gluster issue - you
can downgrade to v6.5 or upgrade to v7.0 (but not 7.1+)
Best Regards,
Strahil Nikolov
On 12 August 2020 at 6:45:3
r all nodes.
As you got a test gluster cluster, you can test it there.
Best Regards,
Strahil Nikolov
On 12 August 2020 at 6:27:56 GMT+03:00, tho...@hoberg.net wrote:
>Thanks for putting in the effort!
>
>I learned a lot of new things.
>I also learned that I need to learn a few more now.
&
- you can downgrade all
gluster packages (but you will need to restart the gluster brick processes).
Best Regards,
Strahil Nikolov
On 12 August 2020 at 2:23:03 GMT+03:00, tho...@hoberg.net wrote:
>While trying to diagnose an issue with a set of VMs that get stopped
>for I/O problems at s
Hey Olaf,
you can add the CentOS Storage SIG repo and patch.
Best Regards,
Strahil Nikolov
On 11 August 2020 at 21:27:23 GMT+03:00, Olaf Buitelaar
wrote:
>Hi Strahil,
>
>Thanks for confirming v7 is working fine with oVirt 4.3, it being from
>you,
>gives quite some faith.
>I
Have you tried to install the appliance rpm manually ?
Most probably you have repo issues.
Best Regards,
Strahil Nikolov
On 11 August 2020 at 17:33:31 GMT+03:00, hkexdong--- via Users
wrote:
>oVirt Node version is 4.4.1. This version can successful deploy before.
>But after I compile th
I have been using v7 for quite some time.
Best Regards,
Strahil Nikolov
On 11 August 2020 at 15:26:51 GMT+03:00, olaf.buitel...@gmail.com wrote:
>Dear oVirt users,
>
>any news on the gluster support side on oVirt 4.3. With 6.10 being
>possibly the latest release, it would be nice
You can access it after subscribing at developers.redhat.com.
The article claims that you have to disable HA on the blank template, yet
this doesn't sound familiar to me.
Best Regards,
Strahil Nikolov
On 10 August 2020 at 17:55:05 GMT+03:00, d...@sekretev.ru wrote:
>Hi!
>ho
Thanks Nir, for the detailed explanation.
Can you tell me, with export/import data domains, what happens to VMs with
snapshots ?
Recently it was mentioned that snapshots are not visible after such a migration.
Best Regards,
Strahil Nikolov
On 10 August 2020 at 0:00:36 GMT+03:00, Nir Soffer wrote
path/to/mounted/qcow2/disk/largefile
I doubt it's different in oVirt as backend is the same (KVM/qemu/raw or qcow2).
Best Regards,
Strahil Nikolov
On 9 August 2020 at 10:15:26 GMT+03:00, Eyal Shenitzky
wrote:
>Hi Jorge,
>
>Currently, there is no mechanism for doing this operatio
Not when both the client and server are the same node and you use
"localhost:/nfs-share"
Best Regards,
Strahil Nikolov
On Saturday, 8 August 2020, 05:56:10 GMT+3, Lao Dh wrote:
NFS utilizes the network adapter, am I right? The LAN port max speed on the
storage is ju
Do you have the option for a POSIX compliant FS ?
If not, I guess the simplest way is to set up an NFS export to be used for
the engine.
Best Regards,
Strahil Nikolov
On 7 August 2020 at 15:59:55 GMT+03:00, Lao Dh via Users
wrote:
>Hello Strahil,I follow the guide in "Installing oV
Are you using the single-node wizard ?
Best Regards,
Strahil Nikolov
On 7 August 2020 at 11:34:52 GMT+03:00, hkexdong--- via Users
wrote:
>I've an external RAID subsystem connect to the host by SAS cable
>(SFF-8644).
>I follow the instructions of the RAID card manufactur
Can you check for errors on the affected host ? Most probably you need the vdsm
logs.
Best Regards,
Strahil Nikolov
On 6 August 2020 at 7:40:23 GMT+03:00, Nardus Geldenhuys
wrote:
>Hi Strahil
>
>Hope you are well. I get the following error when I tried to confirm
>reboot:
&g
When I install Windows, I click on "change cd" and use the oVirt tools CD to
find the virtio net and disk drivers, and then I swap the DVD again.
Of course, you can repeat the process once your Windows is ready, as the 1-click
installer is quite handy.
Best Regards,
Strahil Nikolov
Н
After rebooting the node, have you "marked" it as rebooted ?
Best Regards,
Strahil Nikolov
On 5 August 2020 at 21:29:04 GMT+03:00, Nardus Geldenhuys
wrote:
>Hi oVirt land
>
>Hope you are well. Got a bit of an issue, actually a big issue. We had
>some
>sort
This 'RaidExpert2' sounds like FakeRAID... Run away as soon as possible.
Either use hardware RAID with a RAID controller or software RAID (mdadm/LVM).
Best Regards,
Strahil Nikolov
On 5 August 2020 at 8:11:29 GMT+03:00, Lao Dh wrote:
>Thank you Strahil and Gianluca,I am using oVirt Node
oVirt should merge the disks and release any disk space used.
The best way is to find the VM disks, then identify the disk chain (via
qemu-img), and then find the size of the base disk + all the snapshots.
Strahil Nikolov
On 4 August 2020 at 16:48:23 GMT+03:00, jorgevisent
Are you using the oVirt node ?
If you use custom setup, you need to have the same partitions/LVs that are
used by default .
Can you give a screenshot of the installer?
Best Regards,
Strahil Nikolov
On 3 August 2020 at 16:28:02 GMT+03:00, Gianluca Cecchi
wrote:
>On Mon, Aug 3, 2
disks remain, while the rest are merged into a single file.
Restoring a snapshot is the simplest case - everything after that snapshot is
deleted, and the VM will use the snapshot disk until you delete that snapshot
(which will merge the base disk with the snapshot disk).
Best Regards,
Strahil Nikolov
On 2 August 2
hot potato' .
Yet, I agree that QA tests should have caught it in the first place, but
here comes the community part - to assist the devs with finding the test
cases we all need.
Best Regards,
Strahil Nikolov
On 1 August 2020 at 12:51:37 GMT+03:00, tho...@hoberg.net wrote:
>Unfort
andro,
can you assist with this one ?
Best Regards,
Strahil Nikolov
On 31 July 2020 at 10:01:17 GMT+03:00, Alex K wrote:
>Has anyone been able to import a storage domain and still have access
>to VM
>snapshots or this might be a missing feature/bug that needs to be
>reported?
&g
Damn, those thick fingers...
On 30 July 2020 at 23:12:00 GMT+03:00, Strahil Nikolov
wrote:
>I have been using 7.6 (and rewntly migrated to 7.7) on my oVirt 4.3.10
> withkut any issues so far.
>
>Are you sure that it's not oVirt 4.4 specific ?
>
>Best Regards,
&g
I have been using 7.6 (and rewntly migrated to 7.7) on my oVirt 4.3.10
withkut any issues so far.
Are you sure that it's not oVirt 4.4 specific ?
Best Regards,
Strahil Nikolov
On 30 July 2020 at 15:03:17 GMT+03:00, shadow emy wrote:
>Good that is ok for you now.
>As Gianlu
I've run KVM VMs on top of an oVirt guest. Are you sure that nested
virtualization is your problem ?
Best Regards,
Strahil Nikolov
On 29 July 2020 at 23:33:48 GMT+03:00, tho...@hoberg.net wrote:
>I tried using nested virtualization, too, a couple of weeks ago.
>
>I was usin
Have you tried ovn-trace to detect your issues ?
I think the following blog is quite good:
https://www.google.com/amp/s/blog.russellbryant.net/2016/11/11/ovn-logical-flows-and-ovn-trace/amp/
Best Regards,
Strahil Nikolov
On 27 July 2020 at 15:43:48 GMT+03:00, Konstantinos B
wrote:
>Hi
You can suppress them by following
https://access.redhat.com/solutions/3556491
if $msg contains "message to suppress" then stop
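As a drop-in sketch (the file name and the message text are placeholders):

```
# /etc/rsyslog.d/10-suppress.conf (hypothetical file name)
# Drop messages containing the given substring before they are logged
if $msg contains "message to suppress" then stop
```

Restart rsyslog afterwards for the filter to take effect.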
On 24 July 2020 at 18:46:37 GMT+03:00, Dmitry Kharlamov
wrote:
>Yes! Happened! Thank you so much!
>Didn't know about this possibility.
>
>Oops, the solution is ve
Hi Jiri,
you are the second person who mentions it. Can you open a bug at
bugzilla.redhat.com about that ?
Best Regards,
Strahil Nikolov
On 24 July 2020 at 16:30:02 GMT+03:00, "Jiří Sléžka" wrote:
>On 7/24/20 11:36 AM, Jiří Sléžka wrote:
>> On 7/24/20 10:56 A
For the subscription you have a workaround -> just subscribe at
developers.redhat.com
Best Regards,
Strahil Nikolov
On 24 July 2020 at 17:22:17 GMT+03:00, Dmitry Kharlamov
wrote:
>If it does not make it difficult, please tell me at least the general
>direction in which you need to l
pushed against a typical DB
5. Measure performance during point 4 (for example time of execution)
6. Start over
Anything else is a waste of time.
Best Regards,
Strahil Nikolov
On 24 July 2020 at 13:26:18 GMT+03:00, Stefan Hajnoczi
wrote:
>On Thu, Jul 23, 2020 at 07:25:14AM -0700, Philip Br