[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nyika Csaba
 Hi,

Thank you for your kind, detailed answers.
You helped me a lot.

Now I hope we can solve the problem.

Special thanks to Gianluca too.

csabany 
 Original message 
From: Nir Soffer < nsof...@redhat.com (Link -> mailto:nsof...@redhat.com) >
Date: April 26, 2020 17:39:36
Subject: [ovirt-users] Re: Ovirt vs lvm?
To: Nyika Csaba < csab...@freemail.hu (Link -> mailto:csab...@freemail.hu) >
-[snip]

[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nir Soffer
On Sun, Apr 26, 2020 at 3:00 PM Nyika Csaba  wrote:
>
>
>  Original message 
> From: Gianluca Cecchi < gianluca.cec...@gmail.com (Link -> mailto:gianluca.cec...@gmail.com) >
> Date: April 26, 2020 11:42:40
> Subject: Re: [ovirt-users] Re: Ovirt vs lvm?
> To: Nyika Csaba < csab...@freemail.hu (Link -> mailto:csab...@freemail.hu) >
>
> On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba < csab...@freemail.hu (Link -> 
> mailto:csab...@freemail.hu) > wrote:
>
> Thanks for the advice.
> The hypervisors are "fresh", but the management server arrived from version
> 3.6 step by step (we have used this oVirt since 2015).
> The issue occurred on different clusters, hosts, and different HV versions. For
> example, the last-but-one VM was on an IBM x3650, ovirt-node v4.2 host, and
> the last one on a Lenovo, ovirt-node v4.3.
> Best
>
>
> In theory, on a hypervisor node the only VG listed should be something like
> onn (as in oVirt Node New generation, I think)
>
> In my case I also have Gluster volumes, but in your case with an FC SAN you
> should only have onn
>
> [root@ovirt ~]# vgs
>   VG                 #PV #LV #SN Attr   VSize    VFree
>   gluster_vg_4t        1   2   0 wz--n-   <3.64t      0
>   gluster_vg_4t2       1   2   0 wz--n-   <3.64t      0
>   gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g      0
>   gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g      0
>   onn                  1  11   0 wz--n- <228.40g <43.87g
> [root@ovirt ~]#
>
> And also the command "lvs" should show only onn-related logical volumes...
>
> Gianluca
>
>  Hi,
>
> I checked all nodes, and what I got back from the vgs command is literally
> "unbelievable".
>
> Some hosts look good:
>   VG   #PV #LV #SN Attr   VSize   VFree
>   003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
>   0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
>   1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
>   3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
>   424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
>   4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
>   567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
>   5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
>   8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
>   c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
>   d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t

No, this is not good - these are VGs on shared storage, and the host
should not be able to access them.

>   onn1  11   0 wz--n- 277,46g  54,60g

Is this a guest VG (created inside the guest)? If so, this is bad.

> Others:
>   VG   #PV #LV #SN Attr   VSize   VFree
>   003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
>   0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
>   1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
>   3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
>   424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
>   4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
>   567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
>   5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
>   8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
>   c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
>   d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t

Again, bad.

>   onn1  11   0 wz--n- 277,46g  54,60g
>   vg_okosvaros   2   7   0 wz-pn- <77,20g  0

Bad if these are guest VGs.

> Others:
>   VG   #PV #LV #SN Attr   VSize    VFree
>   003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
>   0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
>   1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
>   3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
>   424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
>   4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
>   5

[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Strahil Nikolov
On April 26, 2020 4:30:33 PM GMT+03:00, Gianluca Cecchi 
 wrote:
>On Sun, Apr 26, 2020 at 2:00 PM Nyika Csaba 
>wrote:
>
>>
>> -[snip]
>>
>
>
>> In theory, on a hypervisor node the only VG listed should be something like
>> onn (as in oVirt Node New generation, I think)
>>
>> In my case I also have Gluster volumes, but in your case with an FC SAN you
>> should only have onn
>>
>> [root@ovirt ~]# vgs
>>   VG                 #PV #LV #SN Attr   VSize    VFree
>>   gluster_vg_4t        1   2   0 wz--n-   <3.64t      0
>>   gluster_vg_4t2       1   2   0 wz--n-   <3.64t      0
>>   gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g      0
>>   gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g      0
>>   onn                  1  11   0 wz--n- <228.40g <43.87g
>> [root@ovirt ~]#
>>
>> And also the command "lvs" should show only onn-related logical
>> volumes...
>>
>> Gianluca
>>
>>  Hi,
>>
>> I checked all nodes, and what I got back from the vgs command is literally
>> "unbelievable".
>>
>>
>Ok, so this is your problem.
>And the main bugzilla, opened by the great guy Germano from Red Hat support
>at the time of RHV 3.6 when I first opened a case on it, was this:
>https://bugzilla.redhat.com/show_bug.cgi?id=1374545
>
>If I remember correctly, you will see the problem only if, inside the VM, you
>configured a PV on the whole virtual disk (and not on its partitions) and if
>the disk of the VM was configured as preallocated.
>
>I don't have the detailed information at hand right now, but you will
>certainly have to modify your LVM filters, rebuild the initramfs of the nodes,
>and reboot them, one by one.
>Inside the bugzilla there was a script for LVM filtering, and there is also
>this page for oVirt:
>
>https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/
>
>Fairly recent installations should prevent this problem, in my opinion, but
>you could be impacted by wrong configurations carried over during upgrades.
>
>Gianluca

I wonder if you also have issues with live migration of VMs between hosts.
Have you noticed anything like that so far?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2H3HFEWM2CKCTVTA7IPQGEXRHHA56HJT/


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Gianluca Cecchi
On Sun, Apr 26, 2020 at 2:00 PM Nyika Csaba  wrote:

>
> -[snip]
>


> In theory, on a hypervisor node the only VG listed should be something like
> onn (as in oVirt Node New generation, I think)
>
> In my case I also have Gluster volumes, but in your case with an FC SAN you
> should only have onn
>
> [root@ovirt ~]# vgs
>   VG                 #PV #LV #SN Attr   VSize    VFree
>   gluster_vg_4t        1   2   0 wz--n-   <3.64t      0
>   gluster_vg_4t2       1   2   0 wz--n-   <3.64t      0
>   gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g      0
>   gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g      0
>   onn                  1  11   0 wz--n- <228.40g <43.87g
> [root@ovirt ~]#
>
> And also the command "lvs" should show only onn-related logical
> volumes...
>
> Gianluca
>
>  Hi,
>
> I checked all nodes, and what I got back from the vgs command is literally
> "unbelievable".
>
>
Ok, so this is your problem.
And the main bugzilla, opened by the great guy Germano from Red Hat support at
the time of RHV 3.6 when I first opened a case on it, was this:
https://bugzilla.redhat.com/show_bug.cgi?id=1374545

If I remember correctly, you will see the problem only if, inside the VM, you
configured a PV on the whole virtual disk (and not on its partitions) and if
the disk of the VM was configured as preallocated.
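
A quick way to check a guest for that layout (a sketch only; /dev/vda and the
VG names below are example values, not taken from this thread):

  # run inside the guest: list each PV and the device it sits on
  pvs -o pv_name,vg_name
    /dev/vda    vg_data     <- PV on the whole virtual disk: the risky layout
    /dev/vda2   vg_system   <- PV on a partition: normally not scanned by the host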

I don't have the detailed information at hand right now, but you will certainly
have to modify your LVM filters, rebuild the initramfs of the nodes, and reboot
them, one by one.
Inside the bugzilla there was a script for LVM filtering, and there is also
this page for oVirt:

https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/
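
For reference, a hedged sketch of what that change usually amounts to on a host
(the device path is only an example; on oVirt 4.2+ the vdsm-tool sub-command
described in the blog post above can generate the filter for you):

  # let vdsm propose a suitable filter (oVirt/RHV 4.2+)
  vdsm-tool config-lvm-filter

  # manual equivalent in the devices section of /etc/lvm/lvm.conf: accept only
  # the PV backing the host's own VG and reject everything else
  filter = [ "a|^/dev/sda2$|", "r|.*|" ]

  # then rebuild the initramfs so the filter is also active at boot, and reboot
  dracut -f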

Fairly recent installations should prevent this problem, in my opinion, but you
could be impacted by wrong configurations carried over during upgrades.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PFRAYL6TOMXHQRNSCRLCD2DSNGXZTFDT/


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nyika Csaba
 
 Original message 
From: Gianluca Cecchi < gianluca.cec...@gmail.com (Link -> mailto:gianluca.cec...@gmail.com) >
Date: April 26, 2020 11:42:40
Subject: Re: [ovirt-users] Re: Ovirt vs lvm?
To: Nyika Csaba < csab...@freemail.hu (Link -> mailto:csab...@freemail.hu) >
 
On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba < csab...@freemail.hu (Link -> 
mailto:csab...@freemail.hu) > wrote:
 
Thanks for the advice.
The hypervisors are "fresh", but the management server arrived from version 3.6
step by step (we have used this oVirt since 2015).
The issue occurred on different clusters, hosts, and different HV versions. For
example, the last-but-one VM was on an IBM x3650, ovirt-node v4.2 host, and the
last one on a Lenovo, ovirt-node v4.3.
Best
 
 
In theory, on a hypervisor node the only VG listed should be something like onn
(as in oVirt Node New generation, I think)

In my case I also have Gluster volumes, but in your case with an FC SAN you
should only have onn
 
[root@ovirt ~]# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree  
  gluster_vg_4t        1   2   0 wz--n-   <3.64t      0
  gluster_vg_4t2       1   2   0 wz--n-   <3.64t      0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g      0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g      0
  onn                  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt ~]#
 
And also the command "lvs" should show only onn-related logical volumes...
 
Gianluca
 
 Hi,

I checked all nodes, and what I got back from the vgs command is literally
"unbelievable".

Some hosts look good:
  VG   #PV #LV #SN Attr   VSize   VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t
  onn    1  11   0 wz--n- 277,46g  54,60g

Others:
  VG   #PV #LV #SN Attr   VSize   VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t
  onn    1  11   0 wz--n- 277,46g  54,60g
  vg_okosvaros   2   7   0 wz-pn- <77,20g  0

Others:
  VG   #PV #LV #SN Attr   VSize    VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
  onn    1  13   0 wz--n- <446,07g  88,39g
  vg_4trdb1p 3   7   0 wz-pn-  157,19g  0
  vg_4trdb1t 3   7   0 wz-pn-  157,19g  0
  vg_deployconfigrepo    3   7   0

[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Gianluca Cecchi
On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba  wrote:

>
>
>
> Thanks for the advice.
>
> The hypervisors are "fresh", but the management server arrived from
> version 3.6 step by step (we have used this oVirt since 2015).
>
> The issue occurred on different clusters, hosts, and different HV versions.
> For example, the last-but-one VM was on an IBM x3650, ovirt-node v4.2 host,
> and the last one on a Lenovo, ovirt-node v4.3.
>
> Best
>
>
In theory, on a hypervisor node the only VG listed should be something like
onn (as in oVirt Node New generation, I think)

In my case I also have Gluster volumes, but in your case with an FC SAN you
should only have onn

[root@ovirt ~]# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree
  gluster_vg_4t        1   2   0 wz--n-   <3.64t      0
  gluster_vg_4t2       1   2   0 wz--n-   <3.64t      0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g      0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g      0
  onn                  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt ~]#

And also the command "lvs" should show only onn-related logical
volumes...

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W6YV72INALG6JSQ4FCPZHWLAU6ERFR4P/


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nyika Csaba
 
 Original message 
From: Gianluca Cecchi < gianluca.cec...@gmail.com (Link -> mailto:gianluca.cec...@gmail.com) >
Date: April 26, 2020 10:01:27
Subject: Re: [ovirt-users] Ovirt vs lvm?
To: csab...@freemail.hu (Link -> mailto:csab...@freemail.hu)
 
On Sat, Apr 25, 2020 at 10:08 PM < csab...@freemail.hu (Link -> 
mailto:csab...@freemail.hu) > wrote:
Hi,
Our production oVirt system looks like this: standalone management server,
version 4.3.9, 6 clusters, 28 nodes (v4.2, v4.3), one storage domain (FC SAN
storages), CentOS 7 VMs, and some Windows VMs.
I have a recurring problem. Sometimes when I power off a VM and power it on
again, I get an error message on our Linux VM (when we use LVM, of course):
dracut: Read-only locking type set. Write locks are prohibited., dracut: Can't
get lock for vg.
I can repair only 70% of the damaged VMs.
I tried to localize the problem, but I can't. The error has occurred randomly on
every cluster and every storage over the last 2 years.
Has anyone ever encountered such a problem?
 
 
I think one possible reason could be the hypervisor not correctly masking LVM
at the VM disk level.
There was a bug in the past about this.
Is this a fresh install, or did it arrive from previous versions?
 
Anyway, verify on all your hypervisors what the output of the command "vgs" is,
and be sure that you only see volume groups related to the hypervisors
themselves and not to inner VMs.
If you have a subset of VMs with the problem, identify whether it happens only
on particular clusters/hosts, so that you can narrow the analysis down to these
hypervisors.
 
HIH,
Gianluca

Thanks for the advice.

The hypervisors are "fresh", but the management server arrived from version 3.6
step by step (we have used this oVirt since 2015).

The issue occurred on different clusters, hosts, and different HV versions. For
example, the last-but-one VM was on an IBM x3650, ovirt-node v4.2 host, and the
last one on a Lenovo, ovirt-node v4.3.

Best
csabany
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JS6MSHKGBJGQZ3ZE4CO7BE2GQGKOTRUH/


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Gianluca Cecchi
On Sat, Apr 25, 2020 at 10:08 PM  wrote:

> Hi,
>
> Our production oVirt system looks like this: standalone management server,
> version 4.3.9, 6 clusters, 28 nodes (v4.2, v4.3), one storage domain (FC
> SAN storages), CentOS 7 VMs, and some Windows VMs.
> I have a recurring problem. Sometimes when I power off a VM and power it on
> again, I get an error message on our Linux VM (when we use LVM, of course):
> dracut: Read-only locking type set. Write locks are prohibited., dracut:
> Can't get lock for vg.
> I can repair only 70% of the damaged VMs.
> I tried to localize the problem, but I can't. The error has occurred randomly
> on every cluster and every storage over the last 2 years.
> Has anyone ever encountered such a problem?
>
>
I think one possible reason could be the hypervisor not correctly masking LVM
at the VM disk level.
There was a bug in the past about this.
Is this a fresh install, or did it arrive from previous versions?

Anyway, verify on all your hypervisors what the output of the command "vgs" is,
and be sure that you only see volume groups related to the hypervisors
themselves and not to inner VMs.
If you have a subset of VMs with the problem, identify whether it happens only
on particular clusters/hosts, so that you can narrow the analysis down to these
hypervisors.
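
A minimal form of that check, to run on each hypervisor (a sketch; on oVirt Node
the host's own VG is typically called onn):

  # list every VG together with the PVs backing it; with FC SAN storage only
  # the node's own VG should show up here
  vgs -o vg_name,pv_name,vg_size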

HIH,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7LQEMMNTNDIMQ4E7NESYK4QUTKCCNDEP/


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nyika Csaba
 Original message 
From: Strahil Nikolov < hunter86...@yahoo.com (Link -> mailto:hunter86...@yahoo.com) >
Date: April 26, 2020 07:57:43
Subject: Re: [ovirt-users] Ovirt vs lvm?
To: csab...@freemail.hu (Link -> mailto:csab...@freemail.hu)
On April 25, 2020 11:07:23 PM GMT+03:00, csab...@freemail.hu wrote:
>Hi,
>
>Our production oVirt system looks like this: standalone management server,
>version 4.3.9, 6 clusters, 28 nodes (v4.2, v4.3), one storage domain
>(FC SAN storages), CentOS 7 VMs, and some Windows VMs.
>I have a recurring problem. Sometimes when I power off a VM and power it on
>again, I get an error message on our Linux VM (when we use LVM, of
>course): dracut: Read-only locking type set. Write locks are
>prohibited., dracut: Can't get lock for vg.
>I can repair only 70% of the damaged VMs.
>I tried to localize the problem, but I can't. The error has occurred
>randomly on every cluster and every storage over the last 2 years.
>Has anyone ever encountered such a problem?
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/AHKCLRZUWO4UVCFRBXN6GETS4M46A2FQ/
I haven't seen such an issue so far, but I can only recommend that you clone
such a VM next time, so you can try to figure out what is going on.
During the repair, have you tried rebuilding the initramfs after the issue
happens?
Best Regards,
Strahil Nikolov

Thanks for the advice!

Definitely yes, I make a new initramfs with dracut.
When this error occurs, the locking_type parameter in the dracut lvm.conf file
has changed to 4.
I set it back to 1: lvm vgchange -ay --config 'global {locking_type=1}'
then write the locking_type in /etc/lvm/lvm.conf back to 1.
Then exit, and (if I have a lucky day) dracut -v -f builds a new initramfs and
the VM works fine.
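
Roughly, that recovery looks like this from the dracut emergency shell (a sketch
of the steps as described above, not a general fix; the VG name is a
placeholder):

  # activate the VG despite the read-only locking type baked into the initramfs
  lvm vgchange -ay --config 'global {locking_type=1}' vg_system

  # set locking_type back to 1 in /etc/lvm/lvm.conf on the root filesystem,
  # then rebuild the initramfs so the bad setting is not carried over again
  dracut -v -f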

The issue appears in pairs: if I find one VM with this "error", one of the
other running VMs has it too.

csabany
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JPJLMNBHBZLBQ7DYCHV7DB53QTE3XBOO/


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Strahil Nikolov
On April 25, 2020 11:07:23 PM GMT+03:00, csab...@freemail.hu wrote:
>Hi,
>
>Our production oVirt system looks like this: standalone management server,
>version 4.3.9, 6 clusters, 28 nodes (v4.2, v4.3), one storage domain
>(FC SAN storages), CentOS 7 VMs, and some Windows VMs.
>I have a recurring problem. Sometimes when I power off a VM and power it on
>again, I get an error message on our Linux VM (when we use LVM, of
>course): dracut: Read-only locking type set. Write locks are
>prohibited., dracut: Can't get lock for vg.
>I can repair only 70% of the damaged VMs.
>I tried to localize the problem, but I can't. The error has occurred
>randomly on every cluster and every storage over the last 2 years.
>Has anyone ever encountered such a problem?
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/AHKCLRZUWO4UVCFRBXN6GETS4M46A2FQ/

I haven't seen such an issue so far, but I can only recommend that you clone
such a VM next time, so you can try to figure out what is going on.
During the repair, have you tried rebuilding the initramfs after the issue
happens?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LS7BHJSPEXUVZCICMSSRH5UM4GWNDLPY/