Can someone tell me if this is expected behaviour, or if only my setup is
defective? Thanks in advance.
Best Regards,
Strahil Nikolov
On Wednesday, July 3, 2019, 13:09:09 GMT-4, Strahil Nikolov
wrote:
Hello Community,
I have noticed that if I want a VM snapshot with memory - I
Parth,
did you use Cockpit to deploy the gluster volume?
It's interesting why Cockpit allowed the engine's volume to be so small...
Best Regards,
Strahil Nikolov
On Friday, July 5, 2019, 10:17:11 GMT-4, Simone Tiraboschi
wrote:
On Fri, Jul 5, 2019 at 4:12 PM Parth
https://bugzilla.redhat.com/show_bug.cgi?id=1712592 is
still valid
1712592 – oVirt 4.3.4 RC1 - cannot attach disk as virtio-scsi during new...
Best Regards,
Strahil Nikolov
On Thursday, June 27, 2019, 4:03:15 GMT-4, Sandro Bonazzola
wrote:
On Thu
You can try to boot with an older kernel (if there is any).
Best Regards,
Strahil Nikolov
On Friday, June 28, 2019, 9:26:22 GMT-4, Crazy Ayansh
wrote:
Hi Team,
Today I rebooted my hosted engine and found it was not coming up. After
connecting through remote viewer I found
If I'm not wrong, this rpm is being downloaded to one of the hosts during
the self-hosted engine's deployment. Why would you try to import a second
self-hosted engine?
Best Regards,
Strahil Nikolov
On Thursday, July 11, 2019, 22:37:56 GMT+3,
wrote:
Hi,
Can someone tell me how
Can you mount the volume manually at another location? Also, have you done any
changes to Gluster?
Please provide "gluster volume info engine". I have noticed the following in
your logs: option 'parallel-readdir' is not recognized
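A minimal sketch of what I would check (assuming the option was set on the "engine" volume mentioned above; adjust names to your setup):

# show the volume's current options
gluster volume info engine
# drop the unrecognized option so clients can mount again
gluster volume reset engine parallel-readdir
# try a manual mount at a temporary location
mount -t glusterfs node01:/engine /mnt/test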
Best Regards,
Strahil Nikolov
On Friday, July 12, 2
I still don't get why you need the OVA. Are you trying to extract it and put it
into another virtualization platform? If that is your intention - better to
install it as if it were a standalone engine.
Best Regards,
Strahil Nikolov
On Friday, July 12, 2019, 21:48:35 GMT+3, Jingjie Jiang
wrote
Hi Alexei,
the qemu-guest-agent is the only available agent for SLES/openSUSE 15 and RHEL
8. Also, qemu-guest-agent should replace ovirt-guest-agent in the future (as far
as I know).
Best Regards,
Strahil Nikolov
On Tuesday, July 9, 2019, 5:00:51 GMT-4, Николаев Алексей
wrote
Hi Neil,
for "Could not fetch data needed for VM migrate operation" - there was a bug
and it was fixed. Are you sure you have fully updated? What procedure did you
use?
Best Regards,
Strahil Nikolov
On Tuesday, July 9, 2019, 7:26:21 GMT-4, Neil
wrote:
Hi guys.
I got the same with the latest oVirt. What exactly do you want to check?
Best Regards,
Strahil Nikolov
On Tuesday, July 9, 2019, 9:31:55 GMT-4, Mark Steele
wrote:
oVirt Engine Version: 3.5.0.1-1.el6 (yes, I know it's old)
The VM is running Ubuntu 18.04 with the oVirt agent installed
The first thing that comes to my mind is to check the network's MTU. By default
it is 1500, and I suppose you can go with MTU 9000. Also, check whether the OS
is using MTU 9000.
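A quick way to verify, as a sketch (the interface name and remote host are placeholders):

# show the current MTU of the interface
ip link show eth0
# check that jumbo frames pass end-to-end: 9000 minus 28 bytes of IP/ICMP headers
ping -M do -s 8972 remote-host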
Best Regards,
Strahil Nikolov
On Tuesday, July 9, 2019, 9:58:47 GMT-4, Mark Steele
wrote:
It seems that our
issue. What agent are you using in the
VMs (ovirt or qemu)?
Best Regards,
Strahil Nikolov
On Tuesday, July 9, 2019, 10:09:05 GMT-4, Neil
wrote:
Hi Strahil,
Thanks for the quick reply. I put the cluster into global maintenance, then
installed the 4.3 repo, then "yum update
Hello Community,
I have noticed that if I want a VM snapshot with memory - I get a warning "The
VM will be paused while saving the memory". Is there a way to make a snapshot
without pausing the VM for the whole duration?
I have noticed that it doesn't matter if the VM has qemu-guest-agent or
Can you wipe the beginning of both SD cards via 'dd'? I have seen similar
issues with CentOS's anaconda on unusual disk setups.
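A destructive sketch of what I mean (assuming the cards show up as /dev/sdX and /dev/sdY - double-check with lsblk first, as this erases data):

# identify the SD cards before wiping anything
lsblk
# zero the first 100 MiB of each card to clear stale partition tables and metadata
dd if=/dev/zero of=/dev/sdX bs=1M count=100 status=progress
dd if=/dev/zero of=/dev/sdY bs=1M count=100 status=progress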
Best Regards,
Strahil Nikolov
On Wednesday, July 3, 2019, 20:34:58 GMT+3, rubentrind...@live.com
wrote:
Hello there!
I'm trying to install an oVirt
Hi,
in my case I have managed to move a VM from one cluster to another (with a
shutdown, of course), and the shared storage is available to both the AMD and
Intel nodes.
Best Regards,
Strahil Nikolov
On Monday, July 1, 2019, 9:35:32 GMT-4, supo...@logicworks.pt
wrote:
Hi,
Thanks
As I'm accessing the VNC from another network, I do use noVNC. For native VNC
or SPICE - I use it only at home (my lab is there), so I don't have to worry
about that.
I think there were a lot of threads about this on the mailing list.
Best Regards,
Strahil Nikolov
On Monday, July 1, 2019, 9:37:34
Do you really use Cinderlib?
If so, I think one of the devs should take a look at:
[ ERROR ] Failed to execute stage 'Environment setup': Cannot connect to ovirt
cinderlib database using existing credentials: ovirt_cinderlib@localhost:5432
Best Regards,
Strahil Nikolov
On
).
The other known issues are in the same status.
Best Regards,
Strahil Nikolov
On Wednesday, July 3, 2019, 9:56:29 GMT-4, Sandro Bonazzola
wrote:
The oVirt Project is pleased to announce the availability of the oVirt 4.3.5
Fourth Release Candidate for testing, as of July 3rd, 2019
entry will force the ovirt VM to reboot and then the system entries
will be detected properly and the system will boot.
Best Regards,
Strahil Nikolov
On Friday, July 5, 2019, 17:05:22 GMT-4, Strahil
wrote:
Hello All,
I would like to ask any of you to do a simple test.
Please
I have opened a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1727987 for
this issue.
Best Regards,
Strahil Nikolov
As nobody has replied, I guess it is a new bug and I'm going to report it in
Bugzilla.
The workaround for anyone who hits this is to modify the grub menu as follows:
FIX
I would recommend checking the brick logs, then the gluster logs, and lastly
the vdsm log.
At least vdsm should time out if it can't create the task in a reasonable time
frame... or maybe not?
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 19:50:51 GMT-4, Steffen Luitz
In which menu do you see it this way?
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 8:55:22 GMT-4, Adrian Quintero
wrote:
Strahil, this is the issue I am seeing now.
This is through the UI when I try to create a new brick.
So my concern is if I modify the filters
Do you have "br-kvm-stor" and "kvm_heart-22" defined for the cluster your VM
is in?
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 16:54:12 GMT-4, eshwa...@gmail.com
wrote:
When creating a new VM, it looks like I connect its NIC(s) under the
TA ssd shared between OS and 4 other
bricks
Since I have switched from the old HDDs to consumer SSD disks, the engine volume
is not reported by sanlock.service, despite Gluster v52.XX having higher latency.
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 21:25:10 GMT-4, Leo David
All my hosts have the same locks, so it seems to be OK.
Best Regards,
Strahil Nikolov
On Thursday, April 25, 2019, 8:28:31 GMT-4, Adrian Quintero
wrote:
under Compute, hosts, select the host that has the locks on /dev/sdb,
/dev/sdc, etc.., select storage devices and in here
-id/pvuuid
/dev/mapper/multipath-uuid
/dev/sdb
Linux will not allow you to work with /dev/sdb when multipath is locking the
block device.
Best Regards,
Strahil Nikolov
On Thursday, April 25, 2019, 8:30:16 GMT-4, Adrian Quintero
wrote:
under Compute, hosts, select the hos
As far as I know, you need to add the hosts, but I don't think you can
add them as Gluster nodes only.
On Wednesday, April 17, 2019, 6:00:39 GMT-4, Zryty ADHD
wrote:
Hi,
I have a question about that. I installed oVirt 4.3.3 on RHEL 7.6 and want to
import my existing
Try to run a find from a working server (for example node02):
find /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore -exec
stat {} \;
Also, check if all peers see each other.
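A minimal peer check, as a sketch (the volume name "vmstore" is taken from the path above):

# every node should list the other peers as 'Peer in Cluster (Connected)'
gluster peer status
# and all bricks should be online
gluster volume status vmstore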
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 3:27:41 GMT-4, Andreas Elvers
d they did not have that option.
Is there any benefit to not using the local brick? Any issues with resetting
that option and using the local brick for reading?
Best Regards,
Strahil Nikolov
the logs yet, but I will keep you updated.
Best Regards,
Strahil Nikolov
On Thursday, July 11, 2019, 5:34:52 GMT-4, Sandro Bonazzola
wrote:
The oVirt Project is pleased to announce the availability of the oVirt 4.3.5
Fifth Release Candidate for testing, as of July 11th, 2019
Have you tried with "Virtual Appliance (OVA)"?
Best Regards,
Strahil Nikolov
On Tuesday, July 2, 2019, 6:17:42 GMT-4, Crazy Ayansh
wrote:
It's asking for a vCenter IP? I have a standalone VMware ESXi server.
Could you please provide the details?
On Tue, Jul 2, 2019
This sounds like a bug. Would you consider opening a bug on bugzilla.redhat.com?
Best Regards,
Strahil Nikolov
On Tuesday, July 2, 2019, 3:53:00 GMT-4, Mitja Pirih
wrote:
On 02. 07. 2019 09:14, Simone Tiraboschi wrote:
On Mon, Jul 1, 2019 at 6:56 PM Mitja Pirih
Hello Community,
did anyone experience disconnects after a minute or 2 (seems random, but I will
check it out) with error code 1006? Can someone with noVNC reproduce that
behaviour?
As I do manage to connect, it seems strange to me to lose the connection like that.
The VM was not migrated - so it
What is the host CPU type?
Best Regards,
Strahil Nikolov
On Sunday, September 1, 2019, 09:45:03 GMT+3, Marcello Gentile
wrote:
Install fails with ERROR ansible failed {'status': 'FAILED', 'ansible_type':
'task', 'ansible_task': u'Fail with error descript
When you go to Edit VM -> System -> Advanced Parameters (right hand side) ->
Custom Compatibility Version, do you have the option for 4.2? If yes, can you
set it to 4.2 and then power off + power on the VM?
Best Regards,
Strahil Nikolov
On Thursday, September 5, 2019, 16:57:35
|org.postgresql.util.PSQLException:This connection has
been closed.|1
I'm not sure how fatal those are, as this type of exception has been logged for
a long time - the earliest occurrence I found in the logs is:
'etlVersion|4.2.4.3' from 2019-01-31.
So, the latest RC seems to be OK.
Best Regards,
Strahil Nikolov
On
|DeleteTimeKeepingJob|Default|6|Java
Exception|tJDBCInput_10|org.postgresql.util.PSQLException:This connection has
been closed.|1' exception?
Best Regards,
Strahil Nikolov
On Thursday, July 18, 2019, 09:39:42 GMT+3, Yedidyah Bar David
wrote:
On Wed, Jul 17, 2019 at 6:15 PM Strahil Nikolov
,Strahil Nikolov
On Thursday, July 18, 2019, 11:16:26 GMT+3, Christoph Köhler
wrote:
Hello,
I am trying to migrate a disk of a running VM from gluster 3.12.15 to gluster
3.12.15 but it fails. libGfApi is set to true by engine-config.
Taking a snapshot first works. Then in the engine log
I have 2 NVMes (gluster data bricks) and I don't have any issues. When exactly
does this occur?
Best Regards,
Strahil Nikolov
On Thursday, July 18, 2019, 12:27:23 GMT+3, bossma...@gmail.com
wrote:
Hey,
I have problems with running oVirt installer on my H370M-ITX mobo. I have
try to move a less important VM's
disks out of this storage domain to another one.
If that succeeds, you can evacuate all VMs before you start
"breaking" the storage domain.
Best Regards,
Strahil Nikolov
On Thursday, July 18, 2019, 16:59:46 GMT+3, Martijn
re a Linux problem than an oVirt problem.
Best Regards,
Strahil Nikolov
On Thursday, July 18, 2019, 18:51:32 GMT+3, Martijn Grendelman
wrote:
Hi!
Thanks. Like I wrote, I have metadata backups from /etc/lvm/backup and
/etc/lvm/archive, and I also have the current metadata as it exis
/nvme0n1 bs=4M
count=100 status=progress
Then reboot and try again.
I assume that you do not have anything on it.
Best Regards,
Strahil Nikolov
On Thursday, July 18, 2019, 19:53:57 GMT+3, bossma...@gmail.com
wrote:
While installer is loading
Usually, when you import the storage domains, the engine will detect the VMs,
but I'm not sure whether that will require downtime or not. Have you tried to
recover the engine? What kind of corruption did you experience?
Best Regards,
Strahil Nikolov
On Wednesday, July 24, 2019, 14:50:55 GMT+3
with the same
IP.
In my case, I have a team device with 6 x 1GbE ports. As my setup is
hyperconverged (replica 3 arbiter 1), I had to use multiple gluster volumes
(each brick has a different port) with 1 VM disk per volume, and to stripe at
VM level for better performance.
Best Regards,
Strahil
A healthy engine should report:
[root@ovirt1 ~]# curl --cacert CA https://engine.localdomain/ovirt-engine/services/health; echo
DB Up!Welcome to Health Status!
Of course you can use the '-k' switch to verify the situation.
Best Regards,
Strahil Nikolov
On Wednesday, July 24, 2019, 17:43:59
Have you tried to remove your task via taskcleaner.sh? You can find some
details at
https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html
I should admit that I have never used it.
Best Regards,
Strahil Nikolov
On Friday, July 26, 2019, 09:03:10 GMT+3
That's strange... Do you have any kind of proxy there?
Best Regards,
Strahil Nikolov
On Thursday, July 25, 2019, 17:31:26 GMT+3, carl langlois
wrote:
Thanks Strahil,
When I run the command it gets stuck waiting for the response.
About to connect() to ovengine port 443 (#0
the host is in
maintenance - you can proceed with your tasks.
P.S.: Don't forget to remove maintenance mode once you are done.
Best Regards,
Strahil Nikolov
.
Maybe a restart of the HostedEngine/Engine will help...
Best Regards,
Strahil Nikolov
On Friday, July 26, 2019, 13:28:13 GMT+3, Enrico
wrote:
Dear all,
I tried this:
# /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -v -t
fdcf4d1b-82fe-49a6-b233-323ebe568f8e
select
Hi All,
I recently had an issue and, due to lack of debug time, I'm still not sure if
it was a loop on the network. Can someone clarify why the ovirtmgmt bridge has
STP disabled? Any reason behind that?
Also, what is the proper way to enable STP on those bridges?
For now, I have set the highest
course, you can split your nodes into several
smaller clusters - if you feel the need for it.
Best Regards,
Strahil Nikolov
On Sunday, June 30, 2019, 6:53:28 GMT-4, wodel youchi
wrote:
Hi,
Here: 3.8 scaling
https://access.redhat.com/documentation/en-us
ifconfig is deprecated and you need additional rpms to get it. To find
the necessary rpm: yum whatprovides "*/ifconfig"
Of course, you should get used to the new "ip" command instead.
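The usual equivalents, as a quick sketch (standard iproute2 commands):

# addresses and interfaces (like ifconfig -a)
ip addr show
# link state and statistics
ip -s link show
# routing table (like route -n)
ip route show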
Best Regards,
Strahil Nikolov
On Sunday, June 30, 2019, 1
-
mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 ending
Can someone clarify what exactly this (from to ) means?
Best Regards,
Strahil Nikolov
On Thursday, June 13, 2019, 17:27:01 GMT+3, Martin Sivak
wrote:
Hi,
IIRC the guest agent is not needed anymore as we get almost
I saw the hint in the Cluster's tab and then on each host. I had to put the
host back to maintenance in order to sync it again.
Logs attached.
Best Regards,
Strahil Nikolov
On Thursday, October 31, 2019, 16:12:53 GMT+2, Dominik Holler
wrote:
On Thu, Oct 31, 2019 at 10:06
Sadly, I can't check them anymore - they are deleted, but I think only the
chipset was changed.
Have you tried with another ISO?
Best Regards,
Strahil Nikolov
On Thursday, October 31, 2019, 13:54:23 GMT+2, Mathieu Simon
wrote:
Hi
On Tue, Oct 29, 2019 at 14:02
Best Regards,
Strahil Nikolov
On Nov 2, 2019 17:17, Strahil wrote:
I think I have a theory about ovirt1, but I have to test it.
1. Ovirt1 has a team device controlled by NetworkManager, but in the engine
those interfaces are unused
>What is your motivation to use a team dev
other ifcfg file, or is it a connection managed by NetworkManager?
Best Regards,
Strahil Nikolov
On Nov 2, 2019 17:17, Strahil wrote:
I think I have a theory about ovirt1, but I have to test it.
1. Ovirt1 has a team device controlled by NetworkManager, but in the engine
those interfaces are unused
op-all-gluster-processes.sh
Verification: After the reboot, I tried to set each oVirt storage domain
to 'Maintenance', which confirms that the engine can update the OVF metadata,
and then set it back to Active. Without downtime this would not be possible.
I hope this long post will help anyone
fails - you have the necessary data from step 0 to rescue
the engine, and as a last resort you have an engine backup.
Good luck, and don't forget to share your experience!
Best Regards,
Strahil Nikolov
On Wednesday, November 13, 2019, 16:53:37 GMT+2, Jonathan Mathews
wrote:
Hi Pavel
Thank you
Have you checked for a newer ISO? I have noticed that sometimes Microsoft fixes
some stuff and provides newer media.
Best Regards,
Strahil Nikolov
On Wednesday, November 13, 2019, 17:58:59 GMT+2, gregor
wrote:
Hello,
during the Installation of Windows Server 2019 after the first reboot
Ah... the host must be in maintenance mode to sync the networks.
Before syncing the network on the host, check your routes, gateway,
/etc/resolv.conf and IP addresses in order to compare what was different
(before & after):
ip a s
ip route
cat /etc/resolv.conf
Best Regards,
Strahil Nikolov
as OVA, or will you try to pull the
VMs directly?
Best Regards,
Strahil Nikolov
On Thursday, November 14, 2019, 12:37:43 GMT+2, wodel youchi
wrote:
Hi,
We have a VMware vCenter platform based on Intel CPUs and we are planning to
migrate to oVirt.
Is it possible to migrate VMs
.
I guess I should restore the engine from the gluster snapshot and roll back via
'yum history undo last'.
Does anyone else have these issues?
Best Regards,
Strahil Nikolov
On Nov 13, 2019 15:31, Sandro Bonazzola wrote:
On Wed, Nov 13, 2019 at 14:25 Sandro Bonazzola
wrote
Can you edit the VM? If yes, remove the DVD and try again.
On Friday, November 22, 2019, 14:50:22 GMT+2, Ivan Apolonio
wrote:
I'm unable to start VMs via command line without ISO domain. Please Help.
= ERROR
status: 409
reason: Conflict
detail:
ide - so do not hesitate to contact
both the ovirt and gluster users' mailing lists in case of an issue.
Best Regards,
Strahil Nikolov
On Thursday, November 21, 2019, 17:01:51 GMT+2, Timo Eissler
wrote:
Hello,
I can't find the documentation on how to recover from a single host OS dis
Have you checked whether the issue is specific to this host? Also, check with
another VM.
Best Regards,
Strahil Nikolov
On Thursday, November 21, 2019, 17:31:33 GMT+2, Don Dupuis
wrote:
Anyone else have this problem? I haven't been able to find a solution.
Don
On Tue, Nov 19, 2019
be forwarded towards Sandro Bonazzola (sbona...@redhat.com).
Best Regards,
Strahil Nikolov
On Tuesday, December 3, 2019, 21:19:22 GMT+2, Strahil Nikolov
wrote:
Usually I add a small disk of type virtio and another NIC (also virtio). Then
when you install, you set the Windows DVD
en
you complete that, "Change CD" again to the Windows install ISO and select the
disk where you want to install your Windows.
Best Regards,
Strahil Nikolov
On Tuesday, December 3, 2019, 16:15:38 GMT+2, Vijay Sachdeva
wrote:
Anyone, please let me know. Really stuck at t
c1eba112-5eed-4c04-b25c-d3dcfb934546"
Is it possible to remove the vNICs and the Virtual Network, and recreate the
OVN DB to start over? I guess the other option is to create a VM that can be
used to install the python openstacksdk and modify things via the python script
from your previous e-mail.
Best Reg
good copy to the
other bricks. You can also run a 'full heal'.
Best Regards,
Strahil Nikolov
On Saturday, December 14, 2019, 21:18:44 GMT+2, Jayme
wrote:
*Update*
Situation has improved. All VMs and engine are running. I'm left right now
with about 2 heal entries in each
According to the GlusterFS Storage Domain documentation, the feature is not the
default, as it is incompatible with Live Storage Migration.
Best Regards,
Strahil Nikolov
On Saturday, December 14, 2019, 17:06:32 GMT+2, Jayme
wrote:
Are there currently any known issues with using libgfapi
ove the maintenance, once you verify that all hosts have
ovirt-ha-broker & ovirt-ha-agent running.
Best Regards,
Strahil Nikolov
On Saturday, December 7, 2019, 22:55:14 GMT+2, Stefan Wolf
wrote:
the content is
[root@kvm380 ~]# ls
/rhev/data-center/mnt/glusterSD/kvm380.durchhalten
ch of extending memory, but it should work.
Best Regards,
Strahil Nikolov
On Sunday, December 8, 2019, 11:01:40 GMT+2, Stefan Wolf
wrote:
hello,
I've decreased the memory of the hosted engine.
Now I am not able to increase the memory permanently.
Right now the memory is 4096 MB.
vn-25cc77-0 on port 3
/var/log/openvswitch/ovs-vswitchd.log:2019-12-12T01:58:15.138Z|00127|bridge|INFO|bridge
br-int: added interface ovn-25cc77-0 on port 6
I'm also attaching the verbose output of the dryrun.
Thanks in advance.
Best Regards,
Strahil Nikolov
[root@ovirt1 openvswitch]# ovs-vsctl --ver
I have CentOS 7 with qemu-guest-agent v2.12.0 and no warning is displayed.
As you use RHEL 7, you should have the same version.
Best Regards,
Strahil Nikolov
On Thursday, October 24, 2019, 20:32:48 GMT+3,
wrote:
Hi Petr,
Thank you for the reply!
Is there any workaround
I have just tested booting a Win10 (originally using Virtio-SCSI) with the disk
set to IDE. It boots slowly, but at least it will allow you to install the
virtio-scsi driver.
Best Regards,
Strahil Nikolov
On Tuesday, October 22, 2019, 13:58:42 GMT+3, matteo fedeli
wrote:
Not True,
my CentOS 7 has qemu-guest-agent and no warning is displayed.
# qemu-ga --version
QEMU Guest Agent 2.12.0
# rpm -qa | grep agent
qemu-guest-agent-2.12.0-3.el7.x86_64
Best Regards,
Strahil Nikolov
On Thursday, October 24, 2019, 18:30:29 GMT+3, Petr Matyáš
wrote
Hello Community, Devs,
does anyone know why only "Manual Migration" allows passing the CPU through
from the host to the VM?
In many environments the clusters have the same CPU model in order to
avoid any issues. Have you observed any issues with that?
Best Regards,
Strahil
I guess nobody has an idea about this one?
Best Regards,
Strahil Nikolov
On Sunday, October 13, 2019, 00:31:15 GMT+3, Strahil Nikolov
wrote:
Just a short update,
I have checked the other node (the one that has not been reinstalled + redeployed)
and it also has the same error: ovirt-ha
d (using "dd if=disk" or
qemu-img info) the disks of the rhel7 VM?
Best Regards,
Strahil Nikolov
On Monday, November 18, 2019, 11:38:13 GMT+2, Sahina Bose
wrote:
On Mon, Nov 18, 2019 at 2:58 PM Sandro Bonazzola wrote:
+Sahina Bose +Gobinda Das +Nir Soffer
I would recommend you "unpresent" that USB device, then replug it (if that's
possible) and refresh the host's Capabilities (Management
dropdown). Only then try to assign the USB device.
Best Regards,
Strahil Nikolov
On Tuesday, November 19, 2019, 05:30:00 GMT+2, Don Dupui
Can you provide the contents of
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml, as it
seems that I do not have it (maybe it's only available during deployment)?
Best Regards,
Strahil Nikolov
On Monday, November 18, 2019, 12:42:26 GMT+2,
rob.dow
Regards,
Strahil Nikolov
>Hi Sahina,
>I have a strange situation:
>1. When I try to access the file via 'sudo -u vdsm dd if=disk of=test bs=4M'
>the command fails on aprox 60MB.
>2. If I run same command as root , remove the file and then run again via vdsm
>user -> this time
,Strahil Nikolov
On Saturday, November 23, 2019, 21:27:38 GMT+2,
wrote:
Gluster fails with
vdo: ERROR - Device /dev/sdb excluded by a filter.\n",
however I have run
[root@ovirt1 ~]# vdo create --name=vdo1 --device=/dev/sdb --force
Creating VDO vdo1
Starting VDO vdo1
Sta
Thanks for the idea - it was an extension that was synced from my chromebook's
account :)
Best Regards,
Strahil Nikolov
On Sunday, November 24, 2019, 14:50:14 GMT+2, Joop
wrote:
On 24-11-2019 12:52, Jayme wrote:
Chrome is my primary browser and I use oVirt admin portal
Hi Joseph,
have you tried restarting your HostedEngine first?
Best Regards,
Strahil Nikolov
On Monday, November 25, 2019, 01:14:51 GMT+2, Joseph Goldman
wrote:
Hi *,
Trying to figure out where a VM's lockstate is stored.
I have a few VMs that seem to be stuck
Have you checked in each host -> Network Interfaces -> NIC whether ovirtmgmt is
out of sync?
Best Regards,
Strahil Nikolov
On Sunday, December 1, 2019, 20:29:06 GMT+2, Vijay Sachdeva
wrote:
Hi Team,
Anyone having any solution for this, please revert.
Thanks
On Sun, 1 De
/ovirt-ha-agent) will
detect the gluster cluster, once you add all nodes in oVirt.
Then you won't have any issues managing the storage (although I prefer the CLI
approach).
Best Regards,
Strahil Nikolov
On Sunday, December 1, 2019, 16:37:23 GMT+2, tho...@hoberg.net
wrote:
Three Gen8
the comments in #1668163 about the chunk size of the cache.
Best Regards,
Strahil Nikolov
On Sunday, December 1, 2019, 16:02:36 GMT+2, Thomas Hoberg
wrote:
Hi Gobinda,
unfortunately it's long gone, because I went back to an un-cached setup.
It was mostly a trial anyway, I had
Does the sanlock user have rights on ./dom_md/ids?
Check sanlock.service for issues: journalctl -u sanlock.service
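A quick check, as a sketch (the storage domain path is a placeholder; on oVirt the ids file is normally owned by vdsm:kvm and sanlock needs group access):

# the ids file should be owned by vdsm:kvm and be group-readable/writable
ls -l /rhev/data-center/mnt/<storage-path>/<SD-UUID>/dom_md/ids
# look for lockspace errors since the last boot
journalctl -u sanlock.service -b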
Best Regards,
Strahil Nikolov
On Sunday, December 1, 2019, 17:22:21 GMT+2, rw...@ropeguru.com
wrote:
I have a clean install with openmediavault as backend
ther “ovirtmgmt” untagged, and YES, I checked that they
are not out of sync. oVirt Engine set that up via VDSM, but still no luck.
Any suggestion would be a great help!
Thanks
Vijay Sachdeva
From: Strahil Nikolov
Date: Monday, 2 December 2019 at 1:27 AM
To: , Vijay Sach
Hello Community,
I have Chrome constantly loading on my openSUSE 15.1 (and my Android phone),
while Firefox has no issues.
Can someone test accessing the oVirt admin portal via Chrome on x86_64 Linux?
Best Regards,
Strahil Nikolov
maintenance.
Still the system was accessible.
Can you guide me to which log reports the out-of-sync state, so I can
investigate further?
Best Regards,
Strahil Nikolov
On Tuesday, October 29, 2019, 3:56:42 GMT-4, Sandro Bonazzola
wrote:
On Tue, Oct 29, 2019 at 04:58 Konstantin
e
result is the same for all my VMs.
Best Regards,
Strahil Nikolov
On Wednesday, November 20, 2019, 18:17:18 GMT+2, Strahil Nikolov
wrote:
Hello All,
my engine is back online, but I'm still having difficulties making vdsm
power up the systems. I think that the events generated today ca
st is really down or you could cause a bigger
problem.
I'm not very sure that you are supposed to use Ceph by giving each VM
direct access.
Have you considered using an iSCSI gateway as an entry point for your storage
domain? That way oVirt will have no issues dealing with the RBD locks.
What is the output of 'gluster volume info data'?
Best Regards,
Strahil Nikolov
On Saturday, October 12, 2019, 16:48:49 GMT+3, matteo fedeli
wrote:
doing reset-brick - I get this error:
Error while executing action Start Gluster Volume Reset Brick:
Volume reset brick
the key type. Available types are:
['he_local', 'he_shared']
[root@ovirt1 ~]# hosted-engine --set-shared-config mnt_options
backup-volfile-servers=gluster2:ovirt3
Duplicate key mnt_options, please specify the key type. Available types are:
['he_local', 'he_shared']
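For reference, a sketch of the disambiguated command the error message asks for (he_shared is my assumption here - pick the type that matches where your mnt_options lives):

hosted-engine --set-shared-config mnt_options backup-volfile-servers=gluster2:ovirt3 --type=he_shared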
Best Regards,
Strahil Nikolov
. You may have to add "force" at the end.
And last, force a heal: gluster volume heal data full
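For context, a sketch of the full reset-brick sequence (host and brick path are placeholders; the volume name "data" comes from this thread):

# take the brick offline
gluster volume reset-brick data node1:/gluster_bricks/data/data start
# replace/remount the disk at the same path, then bring the brick back
gluster volume reset-brick data node1:/gluster_bricks/data/data node1:/gluster_bricks/data/data commit force
# finally, force a full heal
gluster volume heal data full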
Best Regards,
Strahil Nikolov
On Saturday, October 12, 2019, 23:40:49 GMT+3, matteo fedeli
wrote:
What do you mean by the same place? Before, this HDD was on /dev/md0; now on
/dev/sdc. N
Nikolov
On Saturday, October 12, 2019, 23:10:35 GMT+3, Strahil Nikolov
wrote:
Hi All,
After a host reinstall + deploy (UI -> Hosts -> Management -> Reinstall) I see
the following error in the ovirt-ha-broker:
ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to ge
Hi All,
After a host reinstall + deploy (UI -> Hosts -> Management -> Reinstall) I see
the following error in the ovirt-ha-broker:
ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No
'network' in result
Anyone got an idea what this is about? Should I worry about that