Re: [ovirt-users] oVirt Node 4.1.6 on IBM x3650 M3

2017-10-26 Thread Jonathan Baecker
Thank you for your comments! I have now also installed CentOS minimal, and
this works. I had only thought that oVirt Node had some optimizations, but
maybe not.


@Eduardo Mayoral, may I ask whether you are able to use power management
with these servers? As I understand it, they support ipmilan, but I don't
know how to set it up...


Regards
Jonathan

On 24.10.2017 at 23:41, Sean McMurray wrote:
I have seen this problem before. For some reason, oVirt Node 4.1.x
does not always install everything correctly for EFI. In my limited
experience, it fails to do it correctly 4 out of 5 times. The mystery
to me is why it gets it right sometimes. I solve the problem by
manually copying the missing file into my EFI boot partition.
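For anyone hitting the same failure, a sketch of the manual fix described above might look like this. The source and destination paths are assumptions (the destination matches the error messages reported later in this thread); adjust them to your own ESP layout, and treat this as an illustration rather than the official repair procedure:

```shell
# Hedged sketch: put a missing EFI loader file back into the EFI system
# partition. Paths are illustrative, not verified against oVirt Node itself.
restore_efi_file() {
    src="$1"   # a known-good copy, e.g. taken from a working host
    dst="$2"   # e.g. /boot/efi/EFI/centos/grubx64.efi
    if [ -f "$dst" ]; then
        echo "already present: $dst"
    else
        mkdir -p "$(dirname "$dst")"   # create EFI/centos/ if the installer skipped it
        cp -p "$src" "$dst"
        echo "restored: $dst"
    fi
}

# On a real host you would run something like:
#   restore_efi_file /root/grubx64.efi /boot/efi/EFI/centos/grubx64.efi
```

Reinstalling the grub2-efi-x64 and shim-x64 packages is an alternative way to get the file back, assuming the package set on oVirt Node allows it.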



On 10/24/2017 12:46 PM, Eduardo Mayoral wrote:


3 of my compute nodes are IBM x3650 M3. I do not use oVirt Node but
rather plain CentOS 7 for the compute nodes. I use 4.1.6 too.


I remember I had a bad time trying to disable UEFI in the BIOS of
those servers. In my opinion, the firmware in that model is riddled with
problems. In the end, I installed with UEFI (you will need a
/boot/efi partition).


Once installed, I have not had any issues with them.

Eduardo Mayoral Jimeno (emayo...@arsys.es)
Administrador de sistemas. Departamento de Plataformas. Arsys internet.
+34 941 620 145 ext. 5153
On 24/10/17 09:57, Jon bae wrote:

Hello everybody,
I would like to install oVirt Node on an IBM machine, but after the
installation it cannot boot. I get the message:


"/boot/efi/..." file not found

I tried many different things, like turning off the UEFI options in the
BIOS, etc., but with no effect.


Now I have figured out that when I install full CentOS 7.3 from the live
DVD, it just boots normally.


Is there any workaround to get this to work?

Regards

Jonathan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






Re: [ovirt-users] oVirt Node 4.1.6 on IBM x3650 M3

2017-10-26 Thread Jonathan Baecker

Thank you, good to know that this works! I need to play a bit with it.


On 26.10.2017 at 21:59, Eduardo Mayoral wrote:

Yes, I use power management with ipmilan, no issues.

I do have a license on the IMM for remote console, but that is not a
requirement, AFAIK.

I remember I first tried a dedicated login on the IMM for oVirt with just
"Remote Server Power/Restart Access" and could not get it to work, so I just
granted "Supervisor" to the dedicated login. Other than that, no problem.

Eduardo Mayoral Jimeno (emayo...@arsys.es)
Administrador de sistemas. Departamento de Plataformas. Arsys internet.
+34 941 620 145 ext. 5153
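Before wiring this into oVirt's Power Management settings, it is worth confirming the IMM answers IPMI-over-LAN at all. The host name and login below are placeholders, and the function only prints the candidate commands for review, so nothing is executed against real hardware here:

```shell
# Hedged sketch: print the checks to run from an oVirt host against the IMM.
# IMM_HOST and IMM_USER are assumptions; substitute your own values.
IMM_HOST="imm-onode-1.example.lan"
IMM_USER="ovirt-fence"

build_ipmi_checks() {
    echo "ipmitool -I lanplus -H $IMM_HOST -U $IMM_USER -P <password> chassis power status"
    echo "fence_ipmilan -P -a $IMM_HOST -l $IMM_USER -p <password> -o status"
}

build_ipmi_checks   # prints the two commands for review
```

If both report the chassis power state, the same address/user pair should work in the engine with the ipmilan fence type (lanplus enabled), matching the working setup described above.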


Re: [ovirt-users] oVirt Node 4.1.6 on IBM x3650 M3

2017-11-02 Thread Jonathan Baecker

Hello Yuval,
sorry for my late response! I installed from a USB stick... I have now tried 
it again on a second machine, and the exact message is:


   Failed to open \EFI\centos\grubx64.efi - Not Found
   Failed to load image \EFI\centos\grubx64.efi - Not Found
   start_image() returned Not Found

What Brian Barry recommended didn't work for me either. I booted again from 
the stick and chose Troubleshooting; there I can select "1", which should 
mount my installation to a folder, but the startup process ended up in a 
loop, where an error comes up saying something about an error on line 
9. After a second round I lost my patience and installed the normal CentOS 
minimal again.


Is there any performance benefit to installing oVirt Node?

Regards
Jonathan


On 31.10.2017 at 09:46, Yuval Turgeman wrote:

Hi,

We did have some problems with EFI in the past, but they should be 
fixed by now.
Did you use the ISO for installation? What error are you seeing - 
which file is missing there?


Thanks,
Yuval.


[ovirt-users] Sync two Nodes

2017-11-02 Thread Jonathan Baecker

Hello everybody,

I would like to sync two nodes, but I want only one node to run 
permanently. Only once a week or once a month do I want to start the second 
node and sync them again, if necessary.


What would you recommend for this scenario? oVirt has HA functions 
that I could use, but I thought oVirt might then produce errors when one 
node is always off. Am I wrong here? Or are there other options that work 
better?


Have a nice day!

Jonathan



Re: [ovirt-users] Sync two Nodes

2017-11-02 Thread Jonathan Baecker

Thank you for clarification!
On 02.11.2017 at 20:55, Arman Khalatyan wrote:

oVirt HA means that if you have a virtual machine running in the oVirt
environment (let us say 2 nodes), then if the bare metal gets into trouble,
the VM will be restarted on the second one; the failed host must be
fenced: poweroff/reboot. But the HA model assumes that both bare
metal machines are always on and healthy. If the second host is off, then
you simply don't have HA; you would have to ask someone to turn on the second
host in order to rerun your VMs. :)
Usually, if you turn off the "healthy host", it does not have any
information to sync; the ovirt-engine manages all things.

Ok, then HA is not the right choice.


(Maybe the question belongs to a different forum?)
oVirt does not contain any sync/HA functionality on the data side.
Maybe you are looking for an HA/failover file system like
GlusterFS (geo-replication), DRBD (real-time replication),
ZFS send/receive (smart backups + snapshots), or something similar.
If I understand you correctly, there is no necessary data on the 
nodes, and all the information is held by the oVirt engine? My VM images are 
on an NFS share at the moment.
When one node crashes, can I just migrate the VMs to the second node? That 
would be wonderful!



[ovirt-users] Why Node was rebooting?

2017-11-25 Thread Jonathan Baecker

Hello community,

yesterday evening one of our nodes was rebooted, but I have not found 
out why. The engine only reports this:


   24.11.2017 22:01:43 Storage Pool Manager runs on Host onode-1
   (Address: onode-1.worknet.lan).
   24.11.2017 21:58:50 Failed to verify Host onode-1 power management.
   24.11.2017 21:58:50 Status of host onode-1 was set to Up.
   24.11.2017 21:58:41 Successfully refreshed the capabilities of
   host onode-1.
   24.11.2017 21:58:37 VDSM onode-1 command GetCapabilitiesVDS
   failed: Client close
   24.11.2017 21:58:37 VDSM onode-1 command
   HSMGetAllTasksStatusesVDS failed: Not SPM: ()
   24.11.2017 21:58:22 Host onode-1 is rebooting.
   24.11.2017 21:58:22 Kdump flow is not in progress on host onode-1.
   24.11.2017 21:57:51 Host onode-1 is non responsive.
   24.11.2017 21:57:51 VM playout was set to the Unknown status.
   24.11.2017 21:57:51 VM gogs was set to the Unknown status.
   24.11.2017 21:57:51 VM Windows2008 was set to the Unknown status.
   [...]

There is no crash report, and no relevant errors in dmesg.

Does the engine send a reboot command to the node when it gets no 
response? Is there any other way to find out why the node 
rebooted? The node is on a UPS, and all the other servers were running fine...
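On the question of whether the engine triggers the reboot: the engine does not simply tell VDSM to reboot a host; a reboot like this is normally a fence action (soft fencing over SSH, then power management), or something local to the host such as a hardware watchdog. One way to tell the two apart is to search engine.log around the timestamp. A minimal sketch, where the grep pattern is my own guess at useful keywords:

```shell
# Hedged sketch: list fencing-related lines from an oVirt engine log file.
# The keyword pattern is an assumption, not an exhaustive list.
fence_events() {
    logfile="$1"
    grep -iE 'fence|VdsNotRespondingTreatment|non responsive' "$logfile" || true
}

# On the engine VM you would run:
#   fence_events /var/log/ovirt-engine/engine.log
```

If nothing fencing-related shows up around the event, the reboot most likely originated on the host itself, and `last -x reboot` plus the previous boot's journal on the node are the next places to look.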


At the time the reboot happened, I had a big video 
compression job running in one of the VMs, so maybe the CPUs got a bit 
stressed, but they are not overcommitted.



Regards

Jonathan



Re: [ovirt-users] Why Node was rebooting?

2017-11-25 Thread Jonathan Baecker
I did set up power management, but because the second node is off, it does 
not work correctly. I will now install a VM on a different server, 
just to use it as a fencing proxy.

But you think this can be the reason?


On 25.11.2017 at 20:36, Charles Kozler wrote:

Did you set up fencing?

I've also seen this behavior with a stressed CPU and the NMI watchdog in 
the BIOS rebooting a server, but that was on FreeBSD. I have not seen it on 
Linux.




[ovirt-users] Host can not boot any more with current kernel

2018-03-15 Thread Jonathan Baecker

Hello everybody,

Today I updated my engine from 4.2.1.6 to 4.2.1.7, and later I updated two 
hosts. All are running CentOS 7.4.


Now they have kernel 3.10.0-693.21. One host starts normally, but 
the other one always reboots shortly after the menu where I can select the 
kernel. There is not even an error message, and I cannot switch to the 
boot log screen.


I see the same behavior with kernel 3.10.0-693.17. Kernel *.11 and 
older work.
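Until the crash is understood, one workaround is to pin the newest kernel that still boots as the GRUB default. The menu-entry title below is an example rather than one taken from this host (list your own entries first), and the function only prints the commands instead of changing anything:

```shell
# Hedged sketch: pin a known-good kernel on a CentOS 7 host. Printed for
# review only; the entry title is an assumption. On UEFI hosts the config
# lives at /boot/efi/EFI/centos/grub.cfg instead of /boot/grub2/grub.cfg.
show_pin_commands() {
    echo "# list the installed boot entries:"
    echo "awk -F\\' '/^menuentry/ {print \$2}' /boot/grub2/grub.cfg"
    echo "# make the last-known-good one the default, then reboot:"
    echo "grub2-set-default 'CentOS Linux (3.10.0-693.11.6.el7.x86_64) 7 (Core)'"
    echo "grub2-editenv list   # confirm saved_entry"
}

show_pin_commands
```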


Has anybody experienced this issue, and does anybody know what to do?


Regards

Jonathan



[ovirt-users] One Windows VM is not responding

2018-08-26 Thread Jonathan Baecker

Hello Everybody!

We have 13 VMs running here under oVirt 4.2.4. Two of them are Windows 
Server 2016. On one runs AD, DNS, and one application.


The second one runs an SQL Server and also an application. This 
second one has the problem that it periodically goes into a state where 
oVirt shows the message: not responding

Besides this, the VM runs normally. I can connect to it over Remote 
Desktop, but I cannot connect with oVirt/noVNC; the button is grayed out.


A backup script runs in a weekly cycle; maybe this causes some 
problems, but I really don't know how to debug this. The 
VM can run normally for 3 weeks and then go into this state, so I cannot 
really say when it happens. The fact that the other Windows 
Server VM runs normally also puzzles me.
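In oVirt, "not responding" usually means the qemu monitor for that VM stopped answering VDSM, which would also explain why the console button is grayed out while RDP keeps working; a backup/snapshot job that freezes the guest is a plausible trigger. A sketch of where to look on the host carrying the VM, where the UUID is a placeholder and the field name `monitorResponse` is from memory rather than verified here; the function only prints the commands:

```shell
# Hedged sketch: checks for a VM that oVirt marks "not responding".
# VM_UUID is a placeholder; take the real one from the UI or vdsm-client.
VM_UUID="00000000-0000-0000-0000-000000000000"

show_vm_checks() {
    echo "virsh -r list --all                    # libvirt's view of the VM"
    echo "vdsm-client VM getStats vmID=$VM_UUID  # look at monitorResponse"
    echo "grep -i 'monitor' /var/log/vdsm/vdsm.log | tail -n 20"
}

show_vm_checks
```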


Have you experienced this problem, or do you know how I can track down the 
cause?


Best Regards!


Jonathan

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPW4CMJADWSO3ADR35IR3NFJ4JRAYIK3/


[ovirt-users] Re: One Windows VM is not responding

2018-08-26 Thread Jonathan Baecker
Sorry, I forgot to add this information. The VM has the oVirt Guest 
Tools 4.2-1.el7.centos installed.


In the VM log I also now see that the last snapshot (for backup) 
failed.



On 26.08.2018 at 22:45, Wesley Stewart wrote:
Possibly a dumb question, but are you running the latest oVirt guest 
agent tools?




[ovirt-users] cockpit-networkmanager

2018-10-06 Thread Jonathan Baecker

Hello Everybody,

I only wanted to ask whether the oVirt hosts need 
cockpit-networkmanager.


I ask because I cannot update my CentOS hosts; I always get this message:

   Transaction check error:
  file /usr/share/cockpit/networkmanager/manifest.json from install
   of cockpit-system-176-2.el7.centos.noarch conflicts with file from
   package cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.ca.js.gz from install
   of cockpit-system-176-2.el7.centos.noarch conflicts with file from
   package cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.cs.js.gz from install
   of cockpit-system-176-2.el7.centos.noarch conflicts with file from
   package cockpit-networkmanager-172-1.el7.noarch

   ...

When I remove cockpit-networkmanager the error is gone, but after 
running a yum update I am not able to reinstall cockpit-networkmanager, 
because yum still wants to use the old version.
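The file conflict suggests the networkmanager pages were folded into cockpit-system itself in the newer build, in which case the separate package may no longer be needed at all. That reading is an inference from the error above, not confirmed against the cockpit changelog, so the sketch below only prints the candidate commands for review:

```shell
# Hedged sketch: resolve the cockpit-networkmanager / cockpit-system file
# conflict. Printed only; review before running as root ('yum swap' is an
# alternative to the remove + update pair).
show_cockpit_fix() {
    echo "yum remove cockpit-networkmanager"
    echo "yum update cockpit-system cockpit"
    echo "rpm -ql cockpit-system | grep networkmanager   # pages should now come from cockpit-system"
}

show_cockpit_fix
```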



Jonathan





[ovirt-users] Re: Fence Agent in Virtual Environment

2020-11-05 Thread Jonathan Baecker

On 05.11.20 at 19:19, Strahil Nikolov wrote:

You need to enable HA for the VM.


Yes, this I know, and I had it on. When I set the host to maintenance 
mode, the VM moved to another host, but not when I killed the host.



About the XVM approach, I think that you first need to install it on all hosts and then check in the
UI whether you can find that fence agent under "Power Management".


Thanks, I will try this!
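For the archives, the usual fence_xvm wiring looks roughly like the following: fence_virtd runs on the physical Fedora hypervisor, and every nested oVirt host gets fence-virt plus a copy of the shared key. Package names and paths below are the common defaults and were not verified against Fedora 33 specifically; the function prints the steps rather than executing them:

```shell
# Hedged sketch: fence_xvm between a KVM hypervisor and nested oVirt hosts.
# Everything here is printed for review; treat names and paths as assumptions.
show_fence_xvm_steps() {
    echo "# on the Fedora hypervisor:"
    echo "dnf install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast"
    echo "mkdir -p /etc/cluster"
    echo "dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=512 count=1"
    echo "fence_virtd -c                      # interactive configuration"
    echo "systemctl enable --now fence_virtd"
    echo "# on each nested oVirt host, after copying the same key to /etc/cluster/:"
    echo "dnf install fence-virt"
    echo "fence_xvm -o list                   # should list the hypervisor's VMs"
}

show_fence_xvm_steps
```

Once `fence_xvm -o list` works from the hosts, the agent can be tried in the engine's Power Management dialog as a custom fence agent.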

Best Regard

Jonathan





Best Regards,
Strahil Nikolov






On Thursday, 5 November 2020, 18:41:40 GMT+2, jb wrote:





Yes, I know it is just a guess... I wanted to test what happens to a
VM when I kill its host. After that, I did not see the VM
move to another host.
So I thought maybe ovirt needs the power management for that.

I have read about fence_xvm, but I don't know how to configure oVirt
with that.



On 05.11.20 at 17:11, Strahil Nikolov wrote:

This is just a guess, but you might be able to install fence_xvm on all
virtualized hosts.

Best Regards,
Strahil Nikolov






On Thursday, 5 November 2020, 16:00:40 GMT+2, jb wrote:





Hello,

I would like to build a hyperconverged Gluster setup with a hosted engine in a
virtual environment, on Fedora 33 with KVM.

The setup is for testing purposes, especially for testing upgrades before
running them on the real physical servers. But I want to have the setup
as close as possible to the real environment, so the only thing missing
is a fence agent.

Is there a way to simulate power management in a virtual environment?


Jonathan


[ovirt-users] OVA export issues

2020-12-19 Thread Jonathan Baecker

Hello,

I have here an older server: one host with an external engine, a 4.4.4-1.el8 
installation, with slow 2 Gbit NFS storage. When I export OVAs from 
smaller VMs (under 40 GB) it works, but with bigger ones, around 95 GB, I 
have problems.


I have already changed the export target to a local folder, which makes 
the process faster, and the GUI shows no errors; but when I extract the 
vm.ovf file from the exported archive, it is not complete. It is missing the 
closing part of the XML definition. The end stops here:

In my tests before, I was able to add the closing part by hand and import 
the OVA, but then the VM had no disk attached. I don't know if the size 
is the problem; my backup script, which exports the VMs to the export 
domain, works normally. I only know that I can reproduce the error on at 
least 2 VMs.
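Since an oVirt OVA is a plain tar archive with the OVF descriptor as a member, a quick way to spot the truncation without importing is to extract the descriptor and look for the closing Envelope tag. The member name `vm.ovf` matches the description above; whether every archive uses exactly that name is an assumption:

```shell
# Hedged sketch: check whether the OVF inside an OVA ends with the closing
# Envelope tag. Member name vm.ovf is assumed from the report above.
ovf_complete() {
    if tar -xOf "$1" vm.ovf | grep -q '</ovf:Envelope>'; then
        echo "OVF looks complete"
    else
        echo "OVF is truncated"
    fi
}

# Example: ovf_complete /mnt/intern/win2016-01.ova
```

Running this over a batch of exports would show whether only the large VMs are affected.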


The last part of the export log looks like:

   /var/log/ovirt-engine/ansible-runner-service.log:2020-12-19
   12:02:52,610 - runner_service.services.playbook - DEBUG -
   cb_event_handler event_data={'uuid':
   '2ac62a85-7efa-4d14-a941-c1880bd016fd', 'counter': 35, 'stdout':
   'changed: [onode-2.example.org]', 'start_line': 34, 'end_line': 35,
   'runner_ident': '9ac880ce-4219-11eb-bfd8-5254000e4c2c', 'event':
   'runner_on_ok', 'pid': 1039034, 'created':
   '2020-12-19T17:02:52.606639', 'parent_uuid':
   '5254000e-4c2c-ed4e-fdf7-0022', 'event_data': {'playbook':
   'ovirt-ova-export.yml', 'playbook_uuid':
   'ee4a698c-f639-49c8-8fa9-af2778f0862d', 'play': 'all', 'play_uuid':
   '5254000e-4c2c-ed4e-fdf7-0007', 'play_pattern': 'all',
   'task': 'Rename the OVA file', 'task_uuid':
   '5254000e-4c2c-ed4e-fdf7-0022', 'task_action': 'command',
   'task_args': '', 'task_path':
   
'/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-ova-export-post-pack/tasks/main.yml:2',
   'role': 'ovirt-ova-export-post-pack', 'host':
   'onode-2.discovery.intern', 'remote_addr':
   'onode-2.discovery.intern', 'res': {'cmd': ['mv',
   '/mnt/intern/win2016-01.ova.tmp', '/mnt/intern/win2016-01.ova'],
   'stdout': '', 'stderr': '', 'rc': 0, 'start': '2020-12-19
   12:02:52.560917', 'end': '2020-12-19 12:02:52.572358', 'delta':
   '0:00:00.011441', 'changed': True, 'invocation': {'module_args':
   {'_raw_params': 'mv "/mnt/intern/win2016-01.ova.tmp"
   "/mnt/intern/win2016-01.ova"', 'warn': True, '_uses_shell': False,
   'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None,
   'chdir': None, 'executable': None, 'creates': None, 'removes': None,
   'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [],
   '_ansible_no_log': False}, 'start': '2020-12-19T17:02:51.954144',
   'end': '2020-12-19T17:02:52.605820', 'duration': 0.651676,
   'event_loop': None, 'uuid': '2ac62a85-7efa-4d14-a941-c1880bd016fd'}}

Any ideas?


Best regards

Jonathan





[ovirt-users] Upgrade to 4.4.4

2020-12-22 Thread Jonathan Baecker

Hello,

I'm running an upgrade here from 4.4.3 to the latest 4.4.4, on a 3-node 
self-hosted cluster. The engine upgrade went fine, and now I'm on the 
host upgrades. When I check for updates there, it shows only 
ovirt-node-ng-image-update-4.4.4-1.el8.noarch.rpm. For that I have run 
manual updates on each host, with maintenance mode -> yum update -> reboot.


When I now run cat /etc/redhat-release on the engine, it shows:

   CentOS Linux release 8.3.2011

But on my nodes it still shows:

   CentOS Linux release 8.2.2004 (Core)

How can this be?


Best regards

Jonathan




[ovirt-users] Re: Upgrade to 4.4.4

2020-12-22 Thread Jonathan Baecker


After running yum update it also shows this error (output translated from 
German):

   Running transaction
  Preparing        : 1/1
  Running scriptlet:
   ovirt-node-ng-image-update-4.4.4-1.el8.noarch 1/2
  Installing       :
   ovirt-node-ng-image-update-4.4.4-1.el8.noarch 1/2
  Running scriptlet:
   ovirt-node-ng-image-update-4.4.4-1.el8.noarch 1/2
   Warning: %post(ovirt-node-ng-image-update-4.4.4-1.el8.noarch)
   scriptlet failed, exit status 1

   Error in POSTIN scriptlet in rpm package ovirt-node-ng-image-update
  Obsoleting       :
   ovirt-node-ng-image-update-placeholder-4.4.3-2.el8.noarch 2/2
  Verifying        :
   ovirt-node-ng-image-update-4.4.4-1.el8.noarch 1/2
  Verifying        :
   ovirt-node-ng-image-update-placeholder-4.4.3-2.el8.noarch 2/2
   Unpersisting:
   ovirt-node-ng-image-update-placeholder-4.4.3-2.el8.noarch.rpm

   Installed:
   ovirt-node-ng-image-update-4.4.4-1.el8.noarch

   Complete.



[ovirt-users] Re: Upgrade to 4.4.4

2020-12-22 Thread Jonathan Baecker

OK, I got it... I had some packages installed from disabled repos, like 
nano, git, etc. So I needed to remove all of them from 
/var/imgbased/persisted-rpms, and after that I could run:

dnf reinstall 
/var/cache/dnf/ovirt-4.4-8fb26fb2b8638243/packages/ovirt-node-ng-image-update-4.4.4-1.el8.noarch.rpm





[ovirt-users] Re: Creating Snapshots failed

2021-06-02 Thread Jonathan Baecker

On 02.06.21 at 19:08, Strahil Nikolov wrote:

Most probably it does.

Can you try to restart the engine via :
systemctl restart ovirt-engine


Thank you for the hint!

I did, and now I can create a snapshot. But I still cannot delete the 
old locked snapshot disks.


Jonathan





Best Regards,
Strahil Nikolov

On Tue, Jun 1, 2021 at 17:27, jb
 wrote:
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/OLSGTQBD7XEHENSKYVSQ3RIVOURD4PVX/





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3SPN5J2M26CALO26ILHSUN7SKOE3JUXM/


[ovirt-users] Re: Creating Snapshots failed

2021-06-02 Thread Jonathan Baecker

On 02.06.21 at 22:46, Strahil Nikolov wrote:

You can try to use the unlock_entity.sh

# cd /usr/share/ovirt-engine/setup/dbutils
# source /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
# export PGPASSWORD=$ENGINE_DB_PASSWORD
# ./unlock_entity.sh -h

# ./unlock_entity.sh -u engine -t disk -q

Source: https://access.redhat.com/solutions/396753


Sadly, this did not help.

This:

# ./unlock_entity.sh -t all -q

also shows no locked entities.




Best Regards,
Strahil Nikolov


On Wednesday, June 2, 2021, at 22:57:36 GMT+3, Jonathan Baecker 
 wrote:






On 02.06.21 at 19:08, Strahil Nikolov wrote:


   

Most probably it does.



Can you try to restart the engine via :

systemctl restart ovirt-engine

Thank you for the hint!


I did, and now I can create a snapshot. But the old locked snapshot disks I 
still can not delete.

Jonathan




   







Best Regards,

Strahil Nikolov



   
   
On Tue, Jun 1, 2021 at 17:27, jb


 wrote:


   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OLSGTQBD7XEHENSKYVSQ3RIVOURD4PVX/









___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UAMDVSNNHJRD5X233FXIM55NA3GCJFBF/


[ovirt-users] Re: Creating Snapshots failed

2021-06-02 Thread Jonathan Baecker

On 02.06.21 at 22:46, Strahil Nikolov wrote:

You can try to use the unlock_entity.sh

# cd /usr/share/ovirt-engine/setup/dbutils
# source /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
# export PGPASSWORD=$ENGINE_DB_PASSWORD
# ./unlock_entity.sh -h

# ./unlock_entity.sh -u engine -t disk -q

Source: https://access.redhat.com/solutions/396753

Ok, ./unlock_entity.sh -t all worked. Thanks again!



Best Regards,
Strahil Nikolov


On Wednesday, June 2, 2021, at 22:57:36 GMT+3, Jonathan Baecker 
 wrote:






On 02.06.21 at 19:08, Strahil Nikolov wrote:


   

Most probably it does.



Can you try to restart the engine via :

systemctl restart ovirt-engine

Thank you for the hint!


I did, and now I can create a snapshot. But the old locked snapshot disks I 
still can not delete.

Jonathan




   







Best Regards,

Strahil Nikolov



   
   
On Tue, Jun 1, 2021 at 17:27, jb


 wrote:


   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OLSGTQBD7XEHENSKYVSQ3RIVOURD4PVX/









___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NMUPDBJG2T7SND4CN7MY4YRM4FU5Z6SQ/


[ovirt-users] Re: Mount Export Domain only temporary

2021-06-24 Thread Jonathan Baecker

Thanks, Strahil!

On 23.06.21 at 17:33, Strahil Nikolov wrote:

You can mount the NFS outside oVirt and backup to it


But this is only possible with an OVA export, right? Not the 
traditional way of taking a snapshot and exporting the VM to the export 
domain.



Also, you can use pacemaker (if the NFS is a linux server) to ensure 
that failover is immediate and with NFS v3 the recovery of the clients 
should be fast enough to avoid troubles.



Best Regards,
Strahil Nikolov

On Wed, Jun 23, 2021 at 11:51, jb
 wrote:
Hello community,

We use an NFS export domain for full backups. Sometimes I have to
restart
the NFS file server, but when I just restart that server without
doing
anything in oVirt (I guess normally I should put the domain in
maintenance mode), it can happen that hosts and VMs crash.

Once I even had the issue that, after a crash, an XFS file
system on
one host got damaged and I had to repair it.

My thought was: since I make a full VM backup only once a
week,
I could mount the export domain only during that time.

Is there an easy way to do this with the API? Or do you have
another
hint on how to solve this issue?


Have a good day!

Jonathan
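There is, in principle, an easy way via the REST API: a storage domain can be moved between Maintenance and Active with the `deactivate` and `activate` actions on its data center. The sketch below is hedged, not a turnkey solution - `ENGINE`, `DC_ID`, `SD_ID`, and the admin credentials are placeholders you must fill in, and with `DRY_RUN=1` (the default) it only prints the calls instead of executing them.

```shell
#!/bin/sh
# Hedged sketch: activate the export domain before the weekly backup and
# put it back into maintenance afterwards, via the oVirt REST API.
# ENGINE, DC_ID, SD_ID and PASSWORD are placeholders.
ENGINE=${ENGINE:-https://engine.example.com/ovirt-engine/api}
DC_ID=${DC_ID:-REPLACE-WITH-DATA-CENTER-ID}
SD_ID=${SD_ID:-REPLACE-WITH-EXPORT-DOMAIN-ID}
PASSWORD=${PASSWORD:-}
DRY_RUN=${DRY_RUN:-1}

call() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would POST: $1"
    else
        curl -s -k -u "admin@internal:$PASSWORD" \
             -H 'Content-Type: application/xml' -d '<action/>' -X POST "$1"
    fi
}

# Before the backup: bring the export domain out of maintenance.
call "$ENGINE/datacenters/$DC_ID/storagedomains/$SD_ID/activate"

# ... run the backup here ...

# After the backup: back to maintenance, so the NFS server can be
# restarted without affecting the hosts.
call "$ENGINE/datacenters/$DC_ID/storagedomains/$SD_ID/deactivate"
```

The same two actions are exposed by the Python SDK as well, if you prefer that over curl.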

___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/QAFV3KKXAMYOESEPGPWMCSNUTA5LH2JV/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBB3FRVK6W37AJLFFMRANZCQZOZFJ66B/


[ovirt-users] Re: Mount Export Domain only temporary

2021-06-24 Thread Jonathan Baecker


On 25.06.21 at 00:18, Jayme wrote:
A while ago I wrote an ansible playbook to backup ovirt vms via ova 
export to storage attached to one of the hosts (nfs in my case). You 
can check it out here:
https://github.com/silverorange/ovirt_ansible_backup 
<https://github.com/silverorange/ovirt_ansible_backup>


I’ve been using this for a while and it has been working well for me 
on ovirt 4.3 and 4.4

Thank you! It looks interesting, I will give it a try!




On Thu, Jun 24, 2021 at 4:52 PM Strahil Nikolov via Users 
mailto:users@ovirt.org>> wrote:


I think that you can use the API to do backup to the "local" NFS.

Most probably you can still snapshot and copy the disks, but this
will require a lot of effort to identify the read-only disks and
copy them.

Best Regards,
Strahil Nikolov

On Thu, Jun 24, 2021 at 10:11, Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:
___
Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
To unsubscribe send an email to users-le...@ovirt.org
<mailto:users-le...@ovirt.org>
Privacy Statement: https://www.ovirt.org/privacy-policy.html
<https://www.ovirt.org/privacy-policy.html>
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
<https://www.ovirt.org/community/about/community-guidelines/>
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBB3FRVK6W37AJLFFMRANZCQZOZFJ66B/

<https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBB3FRVK6W37AJLFFMRANZCQZOZFJ66B/>


___
Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
To unsubscribe send an email to users-le...@ovirt.org
<mailto:users-le...@ovirt.org>
Privacy Statement: https://www.ovirt.org/privacy-policy.html
<https://www.ovirt.org/privacy-policy.html>
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
<https://www.ovirt.org/community/about/community-guidelines/>
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/CGTTVQYZMA4ET3YH6II7KLGT64QOUJHK/

<https://lists.ovirt.org/archives/list/users@ovirt.org/message/CGTTVQYZMA4ET3YH6II7KLGT64QOUJHK/>

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EY627AFMXJXFS74GC3SIE4OY5R3UL26M/


[ovirt-users] low power, low cost glusterfs storage

2019-03-03 Thread Jonathan Baecker

Hello everybody!

Does anyone here have experience with a cheap, energy-saving GlusterFS 
storage solution? I'm thinking of something with more power than a 
Raspberry Pi and 3 x 2 TB of (SSD) storage, but that doesn't cost much 
more and doesn't consume much more power.


Would that be possible? I know the "Red Hat Gluster Storage" 
requirements, but are they generally that high? Only a few VM images would 
have to be stored on it...


Greetings

Jonathan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWOA24SHB2CV6SDVBIYPL5PJELJDZRND/


[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-14 Thread Jonathan Baecker

On 14.04.2019 at 12:13, Eyal Shenitzky wrote:

Seems like your SPM went down while you had running Live merge operation.

Can you please submit a bug and attach the logs?

Yes, I can do that - but do you really think this is a bug? At that time 
I had only one host running, so it was the SPM. And the time in the 
log is exactly when the host was restarting. But the merge 
jobs and snapshot deletion started ~20 hours before.



On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker <mailto:jonba...@gmail.com>> wrote:


On 14.04.2019 at 07:05, Eyal Shenitzky wrote:

Hi Jonathan,

Can you please add the engine and VDSM logs?

Thanks,


Hi Eyal,

my last message included the engine.log in a zip.

Here are both again, but I deleted some lines to make them smaller.




On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

Hello,

I make automatic backups of my VMs, and last night
some new ones were being made. But somehow oVirt could not delete the
snapshots anymore; in the log it shows that it tried the whole day to
delete them, but they
had to wait until the merge command was done.

In the evening the host crashed completely and started
again. Now I cannot
delete the snapshots manually, and I can also no longer start
the VMs.
In the web interface I get the message:

VM timetrack is down with error. Exit message: Bad volume
specification
{'address': {'bus': '0', 'controller': '0', 'type': 'drive',
'target':
'0', 'unit': '0'}, 'serial':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'index': 0, 'iface': 'scsi', 'apparentsize': '1572864',
'specParams':
{}, 'cache': 'none', 'imageID':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'truesize': '229888', 'type': 'disk', 'domainID':
'9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0',
'format': 'cow',
'poolID': '59ef3a18-002f-02d1-0220-0124', 'device':
'disk',
'path':

'/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1',
'volumeID':
'47c0f42e-8bda-4e3f-8337-870899238788', 'diskType': 'file',
'alias':
'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard': False}.
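As a side note for anyone debugging such a "Bad volume specification" error: the `path` in the spec is composed from the pool, domain, image, and volume IDs, so a quick consistency check can be scripted. This is only an illustrative sketch using the IDs from the message above (the pool ID is truncated here exactly as in the report):

```python
# Hedged sketch: cross-check a VDSM "Bad volume specification" payload by
# rebuilding the expected image path from its IDs. The values below are
# copied from the error message above.
spec = {
    'poolID': '59ef3a18-002f-02d1-0220-0124',  # truncated as reported
    'domainID': '9c3f06cf-7475-448e-819b-f4f52fa7d782',
    'imageID': 'fd3b80fd-49ad-44ac-9efd-1328300582cd',
    'volumeID': '47c0f42e-8bda-4e3f-8337-870899238788',
    'path': '/rhev/data-center/59ef3a18-002f-02d1-0220-0124/'
            '9c3f06cf-7475-448e-819b-f4f52fa7d782/images/'
            'fd3b80fd-49ad-44ac-9efd-1328300582cd/'
            '47c0f42e-8bda-4e3f-8337-870899238788',
}

def expected_path(s):
    """Rebuild the path VDSM should be looking for from the spec's IDs."""
    return ("/rhev/data-center/{poolID}/{domainID}/images/"
            "{imageID}/{volumeID}").format(**s)

# If these differ, the engine database and the storage layout disagree;
# if they match (as here), the file itself is what went missing.
print(expected_path(spec) == spec['path'])  # prints True
```

If the rebuilt path matches but the volume file is gone on storage, the problem is on the storage side rather than in the engine database.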

When I check, the path permissions are correct and there are
also files in it.

Is there any way to fix that? Or to prevent this issue in
the future?

In the attachment I send also the engine.log


Regards

Jonathan




___
Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
To unsubscribe send an email to users-le...@ovirt.org
<mailto:users-le...@ovirt.org>
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLHPEKGQWTVFJCHPJUC3WOXH525SWLEC/



-- 
Regards,

Eyal Shenitzky





--
Regards,
Eyal Shenitzky



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6CXCNVDHJG4YUQFEIUJHVEPTWSJLZKQ5/


[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-14 Thread Jonathan Baecker

On 14.04.2019 at 13:57, Eyal Shenitzky wrote:



On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker <mailto:jonba...@gmail.com>> wrote:


On 14.04.2019 at 12:13, Eyal Shenitzky wrote:

Seems like your SPM went down while you had running Live merge
operation.

Can you please submit a bug and attach the logs?


Yes I can do - but you really think this is a bug? Because in that
time I had only one host running, so this was the SPM. And the
time in the log is exactly this time when the host was restarting.
But the merging jobs and snapshot deleting was starting ~20 hours
before.

We should investigate and see if there is a bug or not.
I looked over the logs and saw some NPEs that might suggest that there may 
be a bug here.
Please attach all the logs including the beginning of the snapshot 
deletion.



Ok, I did:

https://bugzilla.redhat.com/show_bug.cgi?id=1699627

The logs are also in full length.





On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

On 14.04.2019 at 07:05, Eyal Shenitzky wrote:

Hi Jonathan,

Can you please add the engine and VDSM logs?

Thanks,


Hi Eyal,

my last message had the engine.log in a zip included.

Here are both again, but I delete some lines to get it smaller.




On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

Hello,

I make automatically backups of my VMs and last night
there was making
some new one. But somehow ovirt could not delete the
snapshots anymore,
in the log it show that it tried the hole day to delete
them but they
had to wait until the merge command was done.

In the evening the host was totally crashed and started
again. Now I can
not delete the snapshots manually and I can also not
start the VMs
anymore. In the web interface I get the message:

VM timetrack is down with error. Exit message: Bad
volume specification
{'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target':
'0', 'unit': '0'}, 'serial':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'index': 0, 'iface': 'scsi', 'apparentsize': '1572864',
'specParams':
{}, 'cache': 'none', 'imageID':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'truesize': '229888', 'type': 'disk', 'domainID':
'9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0',
'format': 'cow',
'poolID': '59ef3a18-002f-02d1-0220-0124',
'device': 'disk',
'path':

'/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

'propagateErrors': 'off', 'name': 'sda', 'bootOrder':
'1', 'volumeID':
'47c0f42e-8bda-4e3f-8337-870899238788', 'diskType':
'file', 'alias':
'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard':
False}.

When I check the path permission is correct and there
are also files in it.

Is there any ways to fix that? Or to prevent this issue
in the future?

In the attachment I send also the engine.log


Regards

Jonathan




___
Users mailing list -- users@ovirt.org
<mailto:users@ovirt.org>
To unsubscribe send an email to users-le...@ovirt.org
<mailto:users-le...@ovirt.org>
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLHPEKGQWTVFJCHPJUC3WOXH525SWLEC/



-- 
Regards,

Eyal Shenitzky





-- 
Regards,

Eyal Shenitzky





--
Regards,
Eyal Shenitzky



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AIQBUFHJOGJ3GXZLZSBOXIO2IBWL7UU4/


[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-20 Thread Jonathan Baecker

On 14.04.2019 at 14:01, Jonathan Baecker wrote:

On 14.04.2019 at 13:57, Eyal Shenitzky wrote:



On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker <mailto:jonba...@gmail.com>> wrote:


On 14.04.2019 at 12:13, Eyal Shenitzky wrote:

Seems like your SPM went down while you had running Live merge
operation.

Can you please submit a bug and attach the logs?


Yes I can do - but you really think this is a bug? Because in
that time I had only one host running, so this was the SPM. And
the time in the log is exactly this time when the host was
restarting. But the merging jobs and snapshot deleting was
starting ~20 hours before.

We should investigate and see if there is a bug or not.
I overview the logs and saw some NPE that might suggest that there 
may be a bug here.
Please attach all the logs including the beginning of the snapshot 
deletion.



Ok, I did:

https://bugzilla.redhat.com/show_bug.cgi?id=1699627

The logs are also in full length.

Now I have the same issue: my host is trying to delete the snapshots. 
It is still running - no reboot so far. But is there anything I can do?


I'm glad the earlier backup was made correctly; otherwise I would 
be in big trouble. But it looks like I cannot run normal 
backup jobs anymore.








On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

On 14.04.2019 at 07:05, Eyal Shenitzky wrote:

Hi Jonathan,

Can you please add the engine and VDSM logs?

Thanks,


Hi Eyal,

my last message had the engine.log in a zip included.

Here are both again, but I delete some lines to get it smaller.




On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

Hello,

I make automatically backups of my VMs and last night
there was making
some new one. But somehow ovirt could not delete the
snapshots anymore,
in the log it show that it tried the hole day to delete
them but they
had to wait until the merge command was done.

In the evening the host was totally crashed and started
again. Now I can
not delete the snapshots manually and I can also not
start the VMs
anymore. In the web interface I get the message:

VM timetrack is down with error. Exit message: Bad
volume specification
{'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target':
'0', 'unit': '0'}, 'serial':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'index': 0, 'iface': 'scsi', 'apparentsize': '1572864',
'specParams':
{}, 'cache': 'none', 'imageID':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'truesize': '229888', 'type': 'disk', 'domainID':
'9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0',
'format': 'cow',
'poolID': '59ef3a18-002f-02d1-0220-0124',
'device': 'disk',
'path':

'/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

'propagateErrors': 'off', 'name': 'sda', 'bootOrder':
'1', 'volumeID':
'47c0f42e-8bda-4e3f-8337-870899238788', 'diskType':
'file', 'alias':
'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard':
False}.

When I check the path permission is correct and there
are also files in it.

Is there any ways to fix that? Or to prevent this issue
in the future?

In the attachment I send also the engine.log


Regards

Jonathan




___
Users mailing list -- users@ovirt.org
<mailto:users@ovirt.org>
To unsubscribe send an email to users-le...@ovirt.org
<mailto:users-le...@ovirt.org>
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lis

[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-20 Thread Jonathan Baecker

On 20.04.2019 at 20:38, Jonathan Baecker wrote:

On 14.04.2019 at 14:01, Jonathan Baecker wrote:

On 14.04.2019 at 13:57, Eyal Shenitzky wrote:



On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker <mailto:jonba...@gmail.com>> wrote:


On 14.04.2019 at 12:13, Eyal Shenitzky wrote:

Seems like your SPM went down while you had running Live merge
operation.

Can you please submit a bug and attach the logs?


Yes I can do - but you really think this is a bug? Because in
that time I had only one host running, so this was the SPM. And
the time in the log is exactly this time when the host was
restarting. But the merging jobs and snapshot deleting was
starting ~20 hours before.

We should investigate and see if there is a bug or not.
I overview the logs and saw some NPE that might suggest that there 
may be a bug here.
Please attach all the logs including the beginning of the snapshot 
deletion.



Ok, I did:

https://bugzilla.redhat.com/show_bug.cgi?id=1699627

The logs are also in full length.

Now I have the same issue, that my host trying to delete the 
snapshots. It is still running, no reboot until now. But is there 
anything I can do?


I'm happy that the backup before was made correctly, other while I 
would be in big trouble. But it looks like that I can not make any 
more normal backup jobs.


OK, here is an interesting situation. I started to shut down my VMs - first 
the ones with no snapshot deletion running, then also the VMs which 
are in process - and now all deletion jobs have finished successfully. Can it 
be that the host and VM are not communicating correctly, and somehow 
this puts the host in a situation where it cannot merge and delete a 
created snapshot? From some VMs I also get the warning that I need a 
newer ovirt-guest-agent, but there are no updates for it.








On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

On 14.04.2019 at 07:05, Eyal Shenitzky wrote:

Hi Jonathan,

Can you please add the engine and VDSM logs?

Thanks,


Hi Eyal,

my last message had the engine.log in a zip included.

Here are both again, but I delete some lines to get it smaller.




On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

Hello,

I make automatically backups of my VMs and last night
there was making
some new one. But somehow ovirt could not delete the
snapshots anymore,
in the log it show that it tried the hole day to
delete them but they
had to wait until the merge command was done.

In the evening the host was totally crashed and
started again. Now I can
not delete the snapshots manually and I can also not
start the VMs
anymore. In the web interface I get the message:

VM timetrack is down with error. Exit message: Bad
volume specification
{'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target':
'0', 'unit': '0'}, 'serial':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'index': 0, 'iface': 'scsi', 'apparentsize':
'1572864', 'specParams':
{}, 'cache': 'none', 'imageID':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'truesize': '229888', 'type': 'disk', 'domainID':
'9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize':
'0', 'format': 'cow',
'poolID': '59ef3a18-002f-02d1-0220-0124',
'device': 'disk',
'path':

'/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

'propagateErrors': 'off', 'name': 'sda', 'bootOrder':
'1', 'volumeID':
'47c0f42e-8bda-4e3f-8337-870899238788', 'diskType':
'file', 'alias':
'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard':
False}.

When I check the path permission is correct and there
are also files in it.

Is there any ways to fix that? Or to prevent this
issue in the future?

In the attachment I send also the engine.log



[ovirt-users] VMs losing network interfaces

2022-02-20 Thread Jonathan Baecker

Hello everybody,

I see strange behavior here: we have a 3-node self-hosted cluster 
with around 20 VMs running on it. For a while I have had a problem with 
one VM: after some days it loses its network interface. But because 
this VM was only for testing, I was too lazy to dig in and figure 
out what is happening.


Now I have a second VM with the same problem, and this VM is more 
important. Both VMs run Debian 10 and use CIFS mounts, so maybe that 
is related?


Has anyone of you seen this behavior? Can you give me a hint on how 
to fix it?


At the moment I can't provide a log file, because I don't know the 
exact time when this happened. I also don't know whether the problem 
comes from oVirt or from the operating system inside the VMs.


Have a nice day!

Jonathan

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DOXZRQ55LFPNKUVS3AWIPXQDJIVH3X7M/


[ovirt-users] Re: Deleting Snapshot failed

2022-03-02 Thread Jonathan Baecker
I thought that too, but I think it is more than that. When I run: vdsm-tool 
dump-volume-chains "sd_id" I get:


image:    ad23c0db-1838-4f1f-811b-2b213d3a11cd

 - 15259a3b-1065-4fb7-bc3c-04c5f4e14479
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE, capacity: 21474836480, truesize: 6279929856

 - 024e1844-c19b-40d8-a2ac-cb4ea6ec34e6
   status: OK, voltype: LEAF, format: COW, legality: ILLEGAL, type: SPARSE, capacity: 21474836480, truesize: 1302528


15259a3b-1065-4fb7-bc3c-04c5f4e14479 is the snapshot.
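When chains get long, it can help to script the check instead of eyeballing the dump. Below is a small hedged helper (not part of vdsm) that scans `vdsm-tool dump-volume-chains` output for volumes marked ILLEGAL; the sample text is trimmed from the output above:

```python
import re

# Hedged helper: list volume IDs whose legality is ILLEGAL in a
# `vdsm-tool dump-volume-chains` dump. SAMPLE is trimmed from above.
SAMPLE = """\
image:    ad23c0db-1838-4f1f-811b-2b213d3a11cd
 - 15259a3b-1065-4fb7-bc3c-04c5f4e14479
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE
 - 024e1844-c19b-40d8-a2ac-cb4ea6ec34e6
   status: OK, voltype: LEAF, format: COW, legality: ILLEGAL, type: SPARSE
"""

def illegal_volumes(dump):
    """Return the volume UUIDs whose status line says 'legality: ILLEGAL'."""
    vols = []
    current = None
    for line in dump.splitlines():
        m = re.match(r'\s*-\s+([0-9a-f-]{36})', line)
        if m:
            current = m.group(1)          # a new volume entry starts
        elif 'legality: ILLEGAL' in line and current:
            vols.append(current)          # its status line is ILLEGAL
    return vols

print(illegal_volumes(SAMPLE))  # -> ['024e1844-c19b-40d8-a2ac-cb4ea6ec34e6']
```

Feed it the real dump (`vdsm-tool dump-volume-chains <sd_id>`) via stdin or a file to get every ILLEGAL leaf in one pass.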


On 26.02.22 at 21:50, Joseph Goldman wrote:
Sounds to me as though it has completed the disk delete underneath but 
it has remained in the database.


I've had similar issues with it also locking the images on the VM, 
making me worried it could not be started up again should it shut down. 
Unfortunately, the best fix I've found on a production setup is to 
simply restore from the latest backup as a new VM and delete the old 
one. In one or two cases I managed to 'clone' the VM to create the new one.


The other solution 'may' be to just delete it from the database. How 
to do so cleanly I'm not sure, but you'd also have to make sure the 
disk chain etc. is intact and matches what is now in the engine 
database. It gets really messy really quickly when this kind of thing 
happens, and it has caused me a good amount of stress before :P


-- Original Message --
From: "Jonathan Baecker" 
To: "users" 
Sent: 27/02/2022 5:53:38 AM
Subject: [ovirt-users] Deleting Snapshot failed


Hello everybody,

last night my backup script was not able to finish the backup of a 
VM at the last step, deleting the snapshot. Now I also cannot 
delete this snapshot by hand; the message says:


VDSM onode2 command MergeVDS failed: Drive image file could not
be found: {'driveSpec': {'poolID':
'c9baa5d4-3543-11eb-9c0c-00163e33f845', 'volumeID':
'024e1844-c19b-40d8-a2ac-cb4ea6ec34e6', 'imageID':
'ad23c0db-1838-4f1f-811b-2b213d3a11cd', 'domainID':
'3cf83851-1cc8-4f97-8960-08a60b9e25db'}, 'job':
'96c7003f-e111-4270-b922-d9b215aaaea2', 'reason': 'Cannot find
drive'}

The full log you found in the attachment.

Any idea?

Best regard

Jonathan



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TNHVBOGIPGPQWI6T6XNOVKVO4HQFNGX7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/42GH42EGYCIU5V54MXYNUYAW6OZHSJEU/


[ovirt-users] Re: storage high latency, sanlock errors, cluster instability

2022-05-29 Thread Jonathan Baecker

On 29.05.22 at 19:24, Nir Soffer wrote:

On Sun, May 29, 2022 at 7:50 PM Jonathan Baecker  wrote:

Hello everybody,

we run a 3-node self-hosted cluster with GlusterFS. I had a lot of problems 
upgrading oVirt from 4.4.10 to 4.5.0.2, and now we have cluster instability.

First I will write down the problems I had while upgrading, so you get a bigger 
picture:

The engine update went fine.
But I could not update the nodes because of a wrong version of imgbase, so I did 
a manual update to 4.5.0.1 and later to 4.5.0.2. The first time after updating it 
still booted into 4.4.10, so I did a reinstall.
Then after the second reboot I ended up in emergency mode. After a long 
search I figured out that lvm.conf now uses use_devicesfile, but with the wrong 
filters. So I commented this out and added the old filters back. This procedure I 
did on all 3 nodes.

When use_devicesfile (default in 4.5) is enabled, lvm filter is not
used. During installation
the old lvm filter is removed.

Can you share more info on why it does not work for you?
The problem was that the node could not mount the gluster volumes 
anymore and ended up in emergency mode.

- output of lsblk


   NAME                                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
   sda                                                          8:0    0   1.8T  0 disk
   `-XA1920LE10063_HKS028AV                                   253:0    0   1.8T  0 mpath
     |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tmeta   253:16   0     9G  0 lvm
     | `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool 253:18   0   1.7T  0 lvm
     |   |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda     253:19   0   1.7T  1 lvm
     |   |-gluster_vg_sda-gluster_lv_data                     253:20   0   100G  0 lvm   /gluster_bricks/data
     |   `-gluster_vg_sda-gluster_lv_vmstore                  253:21   0   1.6T  0 lvm   /gluster_bricks/vmstore
     `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tdata   253:17   0   1.7T  0 lvm
       `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool 253:18   0   1.7T  0 lvm
         |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda     253:19   0   1.7T  1 lvm
         |-gluster_vg_sda-gluster_lv_data                     253:20   0   100G  0 lvm   /gluster_bricks/data
         `-gluster_vg_sda-gluster_lv_vmstore                  253:21   0   1.6T  0 lvm   /gluster_bricks/vmstore
   sr0                                                         11:0    1  1024M  0 rom
   nvme0n1                                                    259:0    0 238.5G  0 disk
   |-nvme0n1p1                                                259:1    0     1G  0 part  /boot
   |-nvme0n1p2                                                259:2    0   134G  0 part
   | |-onn-pool00_tmeta                                       253:1    0     1G  0 lvm
   | | `-onn-pool00-tpool                                     253:3    0    87G  0 lvm
   | |   |-onn-ovirt--node--ng--4.5.0.2--0.20220513.0+1       253:4    0    50G  0 lvm   /
   | |   |-onn-pool00                                         253:7    0    87G  1 lvm
   | |   |-onn-home                                           253:8    0     1G  0 lvm   /home
   | |   |-onn-tmp                                            253:9    0     1G  0 lvm   /tmp
   | |   |-onn-var                                            253:10   0    15G  0 lvm   /var
   | |   |-onn-var_crash                                      253:11   0    10G  0 lvm   /var/crash
   | |   |-onn-var_log                                        253:12   0     8G  0 lvm   /var/log
   | |   |-onn-var_log_audit                                  253:13   0     2G  0 lvm   /var/log/audit
   | |   |-onn-ovirt--node--ng--4.5.0.1--0.20220511.0+1       253:14   0    50G  0 lvm
   | |   `-onn-var_tmp                                        253:15   0    10G  0 lvm   /var/tmp
   | |-onn-pool00_tdata                                       253:2    0    87G  0 lvm
   | | `-onn-pool00-tpool                                     253:3    0    87G  0 lvm
   | |   |-onn-ovirt--node--ng--4.5.0.2--0.20220513.0+1       253:4    0    50G  0 lvm   /
   | |   |-onn-pool00                                         253:7    0    87G  1 lvm
   | |   |-onn-home                                           253:8    0     1G  0 lvm   /home
   | |   |-onn-tmp                                            253:9    0     1G  0 lvm   /tmp
   | |   |-onn-var                                            253:10   0    15G  0 lvm   /var
   | |   |-onn-var_crash                                      253:11   0    10G  0 lvm   /var/crash
   | |   |-onn-var_log                                        253:12   0     8G  0 lvm   /var/log
   | |   |-onn-var_log_audit                                  253:13   0     2G  0 lvm   /var/log/audit
   | |   |-onn-ovirt--node--ng--4.5.0.1--0.20220511.0+1       253:14   0    50G  0 lvm
   | |   `-onn-var_tmp                                        253:15   0    10G  0 lvm   /var/tmp
   | `-onn-swap                                               253:5    0    20G  0 lvm   [SWAP]
   `-nvme0n1p3                                                259:3    0    95G  0 part
     `-gluster_vg_nvme0n1p3-gluster_lv_engine                 253:6    0    94G  0 lvm   /gluster_bricks/engine


- The old lvm filter used, and why it was needed


   filter =
   ["a|^/dev/disk/by-id/lvm-pv-uuid-Nn7tZl-TFdY-BujO-VZG5-EaGW-5YFd-Lo5pwa$|",
   "a|^/dev/disk/by-id/lvm-pv-uuid-Wcbxnx-2RhC-s1Re-s148-nLj9-Tr3f-jj4VvE$|",
   "a|^/dev/disk/by-id/lvm-pv-uuid-lX51wm-H7V4-3CTn-qYob-Rkpx-Tptd-t94jNL$|",
   "r|.*|"]

I don't remember exactly anymore why it was needed, but without it the 
node was not working correctly. I think I even used vdsm-tool 
config-lvm-filter.
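Just to illustrate how LVM evaluates such a filter (a simplified sketch, not LVM's actual code: real LVM matches the patterns against every path alias of a device, not a single path): the patterns are tried in order, the first one that matches decides, "a" accepts, "r" rejects, and a device matching nothing is accepted by default, which is why the filter ends with the catch-all reject.

```python
import re

# The filter quoted above, rebuilt as (action, regex) pairs.
FILTER = [
    ("a", r"^/dev/disk/by-id/lvm-pv-uuid-Nn7tZl-TFdY-BujO-VZG5-EaGW-5YFd-Lo5pwa$"),
    ("a", r"^/dev/disk/by-id/lvm-pv-uuid-Wcbxnx-2RhC-s1Re-s148-nLj9-Tr3f-jj4VvE$"),
    ("a", r"^/dev/disk/by-id/lvm-pv-uuid-lX51wm-H7V4-3CTn-qYob-Rkpx-Tptd-t94jNL$"),
    ("r", r".*"),  # catch-all: reject every device not whitelisted above
]

def lvm_accepts(path: str) -> bool:
    """Return True if this filter would let LVM scan the given device path."""
    for action, pattern in FILTER:
        if re.search(pattern, path):
            return action == "a"   # first matching pattern decides
    return True                    # no pattern matched: LVM accepts by default

if __name__ == "__main__":
    pv = "/dev/disk/by-id/lvm-pv-uuid-Nn7tZl-TFdY-BujO-VZG5-EaGW-5YFd-Lo5pwa"
    print(lvm_accepts(pv))           # whitelisted PV -> True
    print(lvm_accepts("/dev/sdb"))   # anything else hits r|.*| -> False
```

So with this filter in place, only the three whitelisted PVs are visible to LVM; every other block device (including the gluster bricks' multipath aliases, if their PV UUIDs are not listed) is hidden.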

- output of vdsm-tool config-lvm-filter


   Analyzing host...
   Found these mounted logical volumes on this host:

  logical volume: /dev/mapper/gluster_vg_nvme0n1p3-gluster_lv_engine
  mountpoint:  /gluster_bricks/engine
  devices: /dev/nvme0n1p3

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
  mountpoint:  /gluster_bricks/data
  devices: /dev/mapper/XA1920LE10063_HKS028AV


[ovirt-users] Re: storage high latency, sanlock errors, cluster instability

2022-05-29 Thread Jonathan Baecker

Am 29.05.22 um 20:26 schrieb Nir Soffer:

On Sun, May 29, 2022 at 9:03 PM Jonathan Baecker  wrote:

Am 29.05.22 um 19:24 schrieb Nir Soffer:

On Sun, May 29, 2022 at 7:50 PM Jonathan Baecker  wrote:

Hello everybody,

we run a 3-node self-hosted cluster with GlusterFS. I had a lot of problems 
upgrading oVirt from 4.4.10 to 4.5.0.2 and now we have cluster instability.

First I will write down the problems I had while upgrading, so you get a bigger 
picture:

The engine update went fine.
But I could not update the nodes because of a wrong imgbase version, so I did a 
manual update to 4.5.0.1 and later to 4.5.0.2. The first time after updating it 
was still booting into 4.4.10, so I did a reinstall.
Then after the second reboot I ended up in emergency mode. After a long 
search I figured out that lvm.conf now uses use_devicesfile, but with the wrong 
filters there. So I commented this out and added the old filters back. 
I did this procedure on all 3 nodes.
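For reference, the workaround described above amounts to roughly the following in /etc/lvm/lvm.conf. This is a sketch, not the poster's exact diff: use_devicesfile is the stock lvm.conf setting for disabling the devices file, and the PV UUIDs are the ones from the old filter quoted earlier in this thread.

```
# /etc/lvm/lvm.conf -- sketch of the described workaround
devices {
    # Disable the devices file that became the default in oVirt 4.5,
    # falling back to filter-based device selection.
    use_devicesfile = 0

    # Re-add the old whitelist of PVs; reject everything else.
    filter = [
        "a|^/dev/disk/by-id/lvm-pv-uuid-Nn7tZl-TFdY-BujO-VZG5-EaGW-5YFd-Lo5pwa$|",
        "a|^/dev/disk/by-id/lvm-pv-uuid-Wcbxnx-2RhC-s1Re-s148-nLj9-Tr3f-jj4VvE$|",
        "a|^/dev/disk/by-id/lvm-pv-uuid-lX51wm-H7V4-3CTn-qYob-Rkpx-Tptd-t94jNL$|",
        "r|.*|"
    ]
}
```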

When use_devicesfile (default in 4.5) is enabled, the lvm filter is not used. 
During installation the old lvm filter is removed.

Can you share more info on why it does not work for you?

The problem was that the node could not mount the gluster volumes anymore and 
ended up in emergency mode.

- output of lsblk

NAME                                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                          8:0    0   1.8T  0 disk
`-XA1920LE10063_HKS028AV                                   253:0    0   1.8T  0 mpath
  |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tmeta   253:16   0     9G  0 lvm
  | `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool 253:18   0   1.7T  0 lvm
  |   |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda     253:19   0   1.7T  1 lvm
  |   |-gluster_vg_sda-gluster_lv_data                     253:20   0   100G  0 lvm   /gluster_bricks/data
  |   `-gluster_vg_sda-gluster_lv_vmstore                  253:21   0   1.6T  0 lvm   /gluster_bricks/vmstore
  `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tdata   253:17   0   1.7T  0 lvm
    `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool 253:18   0   1.7T  0 lvm
      |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda     253:19   0   1.7T  1 lvm
      |-gluster_vg_sda-gluster_lv_data                     253:20   0   100G  0 lvm   /gluster_bricks/data
      `-gluster_vg_sda-gluster_lv_vmstore                  253:21   0   1.6T  0 lvm   /gluster_bricks/vmstore
sr0                                                         11:0    1  1024M  0 rom
nvme0n1                                                    259:0    0 238.5G  0 disk
|-nvme0n1p1                                                259:1    0     1G  0 part  /boot
|-nvme0n1p2                                                259:2    0   134G  0 part
| |-onn-pool00_tmeta                                       253:1    0     1G  0 lvm
| | `-onn-pool00-tpool                                     253:3    0    87G  0 lvm
| |   |-onn-ovirt--node--ng--4.5.0.2--0.20220513.0+1       253:4    0    50G  0 lvm   /
| |   |-onn-pool00                                         253:7    0    87G  1 lvm
| |   |-onn-home                                           253:8    0     1G  0 lvm   /home
| |   |-onn-tmp                                            253:9    0     1G  0 lvm   /tmp
| |   |-onn-var                                            253:10   0    15G  0 lvm   /var
| |   |-onn-var_crash                                      253:11   0    10G  0 lvm   /var/crash
| |   |-onn-var_log                                        253:12   0     8G  0 lvm   /var/log
| |   |-onn-var_log_audit                                  253:13   0     2G  0 lvm   /var/log/audit
| |   |-onn-ovirt--node--ng--4.5.0.1--0.20220511.0+1       253:14   0    50G  0 lvm
| |   `-onn-var_tmp                                        253:15   0    10G  0 lvm   /var/tmp
| |-onn-pool00_tdata                                       253:2    0    87G  0 lvm
| | `-onn-pool00-tpool                                     253:3    0    87G  0 lvm
| |   |-onn-ovirt--node--ng--4.5.0.2--0.20220513.0+1       253:4    0    50G  0 lvm   /
| |   |-onn-pool00                                         253:7    0    87G  1 lvm
| |   |-onn-home                                           253:8    0     1G  0 lvm   /home
| |   |-onn-tmp                                            253:9    0     1G  0 lvm   /tmp
| |   |-onn-var                                            253:10   0    15G  0 lvm   /var
| |   |-onn-var_crash                                      253:11   0    10G  0 lvm   /var/crash
| |   |-onn-var_log                                        253:12   0     8G  0 lvm   /var/log
| |   |-onn-var_log_audit                                  253:13   0     2G  0 lvm   /var/log/audit
| |   |-onn-ovirt--node--ng--4.5.0.1--0.20220511.0+1       253:14   0    50G  0 lvm
| |   `-onn-var_tmp                                        253:15   0    10G  0 lvm   /var/tmp