[ovirt-users] Re: [ANN] Schedule for oVirt 4.5.0

2022-02-23 Thread Sandro Bonazzola
Il giorno mer 23 feb 2022 alle ore 19:23 Gilboa Davara 
ha scritto:

> On Wed, Feb 23, 2022 at 12:46 PM Sandro Bonazzola 
> wrote:
>
>> Il giorno mer 23 feb 2022 alle ore 11:36 Gilboa Davara
>>  ha scritto:
>> >
>> > Hello,
>> >
>> > Gluster is still mentioned in the release page.
>> > Will it be supported as a storage backend in 4.5?
>>
>>
>> As RHGS is going end of life in 2024 it is being deprecated for RHV.
>> The upstream Gluster project has no plan for going end of life as far
>> as I know so there is no reason to remove the possibility of using
>> gluster as storage backend in oVirt.
>> There's no plan to completely remove support for Gluster as a storage
>> backend.
>>
>
> Many thanks for the prompt response.
> Does it include hosted engine storage domain support (read: hosted-engine
> --deploy support)?
>

Yes, there is no plan to remove the code handling Gluster in hosted-engine
--deploy.
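For reference, a minimal sketch of a Gluster-backed deployment (the volume
path is only an example, and the exact prompt wording differs between
versions):

hosted-engine --deploy
# -> when asked for the storage type, answer: glusterfs
# -> when asked for the storage connection path, give the volume,
#    e.g. gluster1.example.com:/engine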



>
> - Gilboa
>
>
>>
>> >
>> >
>> > - Gilboa
>> >
>> >
>> > On Tue, Feb 22, 2022 at 4:57 PM Sandro Bonazzola 
>> wrote:
>> >>
>> >> The oVirt development team leads are pleased to inform that the
>> >> schedule for oVirt 4.5.0 has been finalized.
>> >>
>> >> The key dates follow:
>> >>
>> >> * Feature Freeze - String Freeze - Alpha release: 2022-03-15
>> >> * Alpha release test day: 2022-03-17
>> >> * Code freeze - Beta release: 2022-03-29
>> >> * Beta release test day: 2022-03-31
>> >> * General Availability release: 2022-04-12
>> >>
>> >> A release management draft page has been created at:
>> >> https://www.ovirt.org/release/4.5.0/
>> >>
>> >> If you're willing to help testing the release during the test days
>> >> please join the oVirt development mailing list at
>> >> https://lists.ovirt.org/archives/list/de...@ovirt.org/ and report your
>> >> feedback there.
>> >> Instructions for installing oVirt 4.5.0 Alpha and oVirt 4.5.0 Beta for
>> >> testing will be added to the release page
>> >> https://www.ovirt.org/release/4.5.0/ when the corresponding version
>> >> will be released.
>> >>
>> >> Professional Services, Integrators and Backup vendors: please plan a
>> >> test session against your additional services, integrated solutions,
>> >> downstream rebuilds, backup solution accordingly.
>> >> If you're not listed here:
>> >> https://ovirt.org/community/user-stories/users-and-providers.html
>> >> consider adding your company there.
>> >>
>> >> If you're willing to help updating the localization for oVirt 4.5.0
>> >> please follow https://ovirt.org/develop/localization.html
>> >>
>> >> If you're willing to help promoting the oVirt 4.5.0 release you can
>> >> submit your banner proposals for the oVirt home page and for the
>> >> social media advertising at https://github.com/oVirt/ovirt-site/issues
>> >> As an alternative please consider submitting a case study as in
>> >> https://ovirt.org/community/user-stories/user-stories.html
>> >>
>> >> Feature owners: please start planning a presentation of your feature
>> >> for oVirt Youtube channel: https://www.youtube.com/c/ovirtproject
>> >>
>> >> Do you want to contribute to getting ready for this release?
>> >> Read more about oVirt community at https://ovirt.org/community/ and
>> >> join the oVirt developers https://ovirt.org/develop/
>> >>
>> >> Thanks,
>> >> --
>> >>
>> >> Sandro Bonazzola
>> >> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>> >> Red Hat EMEA
>> >> sbona...@redhat.com
>> >> Red Hat respects your work life balance. Therefore there is no need to
>> >> answer this email out of your office hours.
>> >> ___
>> >> Users mailing list -- users@ovirt.org
>> >> To unsubscribe send an email to users-le...@ovirt.org
>> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> >> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7646LEQIHL76HIJTAZWCXWAHT3M6V47C/
>>
>>
>>
>> --
>>
>> Sandro Bonazzola
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>> Red Hat EMEA
>> sbona...@redhat.com
>> Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>>
>>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PCM5WTBANQQ47VY5QD427OBKRCXZCSRP/


[ovirt-users] Re: Migrate Hosted-Engine from one NFS mount to another NFS mount

2022-02-23 Thread matthew.st...@fujitsu.com
I’ve always been told that migrating self-hosted-engine storage was a backup,
shutdown, and rebuild-from-backup procedure.

In my iscsi environment it has never worked. (More due to the history of my
environment than the procedure itself.)

Since we have too many hosts in a datacenter, we’ve chosen another
method: attrition.

I’m splitting off clusters into their own datacenters. Once I’m down to the
last host, running the original SHE, I will simply power it down, reload it,
and re-add it to one of my new datacenters.
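For reference, the usual backup/restore flow looks roughly like this (file
names are placeholders; the --restore-from-file deployment option may not
exist on older releases, so check the documentation for your version):

# on the current engine VM: take a full backup and copy it off the VM
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log

# put the old deployment into global maintenance, shut the engine VM down,
# then redeploy on the new storage and restore in one step:
hosted-engine --set-maintenance --mode=global
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz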


From: Brian Ismay 
Sent: Wednesday, February 23, 2022 4:52 PM
To: users@ovirt.org
Subject: [ovirt-users] Migrate Hosted-Engine from one NFS mount to another NFS 
mount

Hello,

I am currently running oVirt 4.2.5.3-1.el7 with a Hosted-Engine running on
NFS-based storage.

I need to migrate the backend NFS store for the Hosted-Engine to a new
share.

Is there specific documentation for this procedure? I see multiple references
to a general procedure involving backing up the existing install and then
re-installing and restoring the backup. Is there any way to make a cleaner
change of the configuration?

Thanks,
Brian Ismay

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U64NCDIXONK3R2F2UL4YISGAEQRDBK5W/


[ovirt-users] Migrate Hosted-Engine from one NFS mount to another NFS mount

2022-02-23 Thread Brian Ismay
Hello,

I am currently running oVirt 4.2.5.3-1.el7 with a Hosted-Engine running on
NFS-based storage.

I need to migrate the backend NFS store for the Hosted-Engine to a
new share.

Is there specific documentation for this procedure? I see multiple
references to a general procedure involving backing up the existing install
and then re-installing and restoring the backup. Is there any way to make a
cleaner change of the configuration?

Thanks,
Brian Ismay
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DGSVBFXU6HTTVNLYRJ6HSATTPSLBNGMO/


[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Strahil Nikolov via Users
You can try to play a little bit with the I/O threads (but don't jump too fast).
What are your I/O scheduler and mount options? You can reduce I/O lookups if you
specify 'noatime' and the SELinux context in the mount options.
A real killer of performance is latency. What is the latency between all the
nodes?
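For example (the brick device, the LV path in the fstab line and the SELinux
context are assumptions for illustration):

# check the current I/O scheduler of the brick device
cat /sys/block/sdb/queue/scheduler

# example fstab entry for a brick mounted with noatime and a fixed SELinux
# context, so the kernel skips atime updates and per-file context lookups
/dev/gluster_vg/vmstore /gluster_bricks/vmstore xfs inode64,noatime,context="system_u:object_r:glusterd_brick_t:s0" 0 0

# measure round-trip latency between the storage interfaces of the nodes
ping -c 20 ovirt2-storage.dgi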

Best Regards,
Strahil Nikolov
 
 
  On Wed, Feb 23, 2022 at 20:26, Alex Morrison wrote:  
 Hello All,
I believe the network is performing as expected, I did an iperf test:

[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# iperf3 -c 10.10.1.2
Connecting to host 10.10.1.2, port 5201
[  5] local 10.10.1.1 port 38422 connected to 10.10.1.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.08 GBytes  9.24 Gbits/sec    0   2.96 MBytes
[  5]   1.00-2.00   sec  1.03 GBytes  8.81 Gbits/sec    0   2.96 MBytes
[  5]   2.00-3.00   sec  1006 MBytes  8.44 Gbits/sec  101   1.45 MBytes
[  5]   3.00-4.00   sec  1.04 GBytes  8.92 Gbits/sec    5    901 KBytes
[  5]   4.00-5.00   sec  1.05 GBytes  9.01 Gbits/sec    0    957 KBytes
[  5]   5.00-6.00   sec  1.08 GBytes  9.23 Gbits/sec    0    990 KBytes
[  5]   6.00-7.00   sec  1008 MBytes  8.46 Gbits/sec  159    655 KBytes
[  5]   7.00-8.00   sec  1.06 GBytes  9.11 Gbits/sec    0    970 KBytes
[  5]   8.00-9.00   sec  1.03 GBytes  8.85 Gbits/sec    2    829 KBytes
[  5]   9.00-10.00  sec  1.04 GBytes  8.96 Gbits/sec    0    947 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.4 GBytes  8.90 Gbits/sec  267             sender
[  5]   0.00-10.04  sec  10.4 GBytes  8.87 Gbits/sec                  receiver

iperf Done.

On Wed, Feb 23, 2022 at 11:45 AM Sunil Kumar Heggodu Gopala Acharya 
 wrote:


Regards,
Sunil


On Wed, Feb 23, 2022 at 7:34 PM Derek Atkins  wrote:

Have you verified that you're actually getting 10Gbps between the hosts?

-derek

On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
> Hello Derek,
>
> We have a 10Gig connection dedicated to the storage network, nothing else
> is on that switch.
>
> On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:
>
>> Hi,
>>
>> Another question which I don't see answered:   What is the underlying
>> connectivity between the Gluster hosts?
>>
>> -derek
>>
>> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
>> > Hello Sunil,
>> >
>> > [root@ovirt1 ~]# gluster --version
>> > glusterfs 8.6
>> >
>> > same on all hosts

Latest Release-10.1 (https://lists.gluster.org/pipermail/gluster-users/2022-February/039761.html)
has some performance fixes which should help in this situation compared to the
older gluster bits.
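A rough upgrade sketch for CentOS Stream 8 based hosts (the SIG release
package name is an assumption based on the CentOS Storage SIG naming; upgrade
one node at a time and wait for heals to finish in between):

dnf install centos-release-gluster10
dnf upgrade glusterfs-server glusterfs-fuse
systemctl restart glusterd
gluster volume heal vmstore info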
>> >
>> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
>> > shegg...@redhat.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> Which version of gluster is in use?
>> >>
>> >> Regards,
>> >>
>> >> Sunil kumar Acharya
>> >>
>> >> Red Hat
>> >>
>> >> 
>> >>
>> >> T: +91-8067935170
>> >> 
>> >>
>> >> 
>> >> TRIED. TESTED. TRUSTED. 
>> >>
>> >>
>> >>
>> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison
>> 
>> >> wrote:
>> >>
>> >>> Hello All,
>> >>>
>> >>> We have 3 servers with a raid 50 array each, we are having extreme
>> >>> performance issues with our gluster, writes on gluster seem to take
>> at
>> >>> least 3 times longer than on the raid directly. Can this be
>> improved?
>> >>> I've
>> >>> read through several other performance issues threads but have been
>> >>> unable
>> >>> to make any improvements
>> >>>
>> >>> "gluster volume info" and "gluster volume profile vmstore info" is
>> >>> below
>> >>>
>> >>>
>> >>>
>> =
>> >>>
>> >>> -Inside Gluster - test took 35+ hours:
>> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> >>> 600G -n 0 -m TEST -f -b -u root
>> >>> Using uid:0, gid:0.
>> >>> Writing intelligently...done
>> >>> Rewriting...done
>> >>> Reading intelligently...done
>> >>> start 'em...done...done...done...done...done...
>> >>> Version  1.98       --Sequential Output-- --Sequential
>> Input-
>> >>> --Random-
>> >>>                     -Per Chr- --Block-- -Rewrite- -Per Chr-
>> --Block--
>> >>> --Seeks--
>> >>> Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
>> %CP
>> >>>  /sec %CP
>> >>> TEST           600G           35.7m  17 5824k   7            112m
>> 13
>> >>> 182.7   6
>> >>> Latency                        5466ms   12754ms              3499ms
>> >>>  1589ms
>> >>>
>> >>>
>> >>>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>> >>>
>> >>>
>> >>>
>> ==

[ovirt-users] Re: VMs losing network interfaces

2022-02-23 Thread Strahil Nikolov via Users
You can always create overrides like this:
/etc/systemd/system/<unit>.d/someconfname.conf, containing a [Section] with the Key=Value you want to override.
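A minimal sketch (the NetworkManager unit and the Restart= key below are just
placeholders to show the mechanics, not a recommendation for this particular
problem):

mkdir -p /etc/systemd/system/NetworkManager.service.d
cat > /etc/systemd/system/NetworkManager.service.d/someconfname.conf <<'EOF'
[Service]
# example override; replace with the setting you actually need
Restart=always
EOF
systemctl daemon-reload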
Best Regards,
Strahil Nikolov
 
 
  On Wed, Feb 23, 2022 at 14:53, jb wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QB5GLYRBTVJE42O7QM2EARU2TRV52FUJ/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WINMMN6U3LEZRLQ34OMK4IGRTNE5B4M4/


[ovirt-users] Re: hosted engine deployment (v4.4.10) - TASK Check engine VM health - fatal FAILED

2022-02-23 Thread Strahil Nikolov via Users
Nope, it's in the appstream (CentOS Stream), but I never tested it.
Best Regards,
Strahil Nikolov
 
 
  On Wed, Feb 23, 2022 at 12:42, Gilboa Davara wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZF3X276Y2WS34RVF7DZ3FLH5UCYUBDZN/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3IOUYPIOSLGDCTDE4JSGXYJUMYJCNJZU/


[ovirt-users] Re: [ANN] Schedule for oVirt 4.5.0

2022-02-23 Thread Gilboa Davara
On Wed, Feb 23, 2022 at 12:46 PM Sandro Bonazzola 
wrote:

> Il giorno mer 23 feb 2022 alle ore 11:36 Gilboa Davara
>  ha scritto:
> >
> > Hello,
> >
> > Gluster is still mentioned in the release page.
> > Will it be supported as a storage backend in 4.5?
>
>
> As RHGS is going end of life in 2024 it is being deprecated for RHV.
> The upstream Gluster project has no plan for going end of life as far
> as I know so there is no reason to remove the possibility of using
> gluster as storage backend in oVirt.
> There's no plan to completely remove support for Gluster as a storage
> backend.
>

Many thanks for the prompt response.
Does it include hosted engine storage domain support (read: hosted-engine
--deploy support)?

- Gilboa


>
> >
> >
> > - Gilboa
> >
> >
> > On Tue, Feb 22, 2022 at 4:57 PM Sandro Bonazzola 
> wrote:
> >>
> >> The oVirt development team leads are pleased to inform that the
> >> schedule for oVirt 4.5.0 has been finalized.
> >>
> >> The key dates follow:
> >>
> >> * Feature Freeze - String Freeze - Alpha release: 2022-03-15
> >> * Alpha release test day: 2022-03-17
> >> * Code freeze - Beta release: 2022-03-29
> >> * Beta release test day: 2022-03-31
> >> * General Availability release: 2022-04-12
> >>
> >> A release management draft page has been created at:
> >> https://www.ovirt.org/release/4.5.0/
> >>
> >> If you're willing to help testing the release during the test days
> >> please join the oVirt development mailing list at
> >> https://lists.ovirt.org/archives/list/de...@ovirt.org/ and report your
> >> feedback there.
> >> Instructions for installing oVirt 4.5.0 Alpha and oVirt 4.5.0 Beta for
> >> testing will be added to the release page
> >> https://www.ovirt.org/release/4.5.0/ when the corresponding version
> >> will be released.
> >>
> >> Professional Services, Integrators and Backup vendors: please plan a
> >> test session against your additional services, integrated solutions,
> >> downstream rebuilds, backup solution accordingly.
> >> If you're not listed here:
> >> https://ovirt.org/community/user-stories/users-and-providers.html
> >> consider adding your company there.
> >>
> >> If you're willing to help updating the localization for oVirt 4.5.0
> >> please follow https://ovirt.org/develop/localization.html
> >>
> >> If you're willing to help promoting the oVirt 4.5.0 release you can
> >> submit your banner proposals for the oVirt home page and for the
> >> social media advertising at https://github.com/oVirt/ovirt-site/issues
> >> As an alternative please consider submitting a case study as in
> >> https://ovirt.org/community/user-stories/user-stories.html
> >>
> >> Feature owners: please start planning a presentation of your feature
> >> for oVirt Youtube channel: https://www.youtube.com/c/ovirtproject
> >>
> >> Do you want to contribute to getting ready for this release?
> >> Read more about oVirt community at https://ovirt.org/community/ and
> >> join the oVirt developers https://ovirt.org/develop/
> >>
> >> Thanks,
> >> --
> >>
> >> Sandro Bonazzola
> >> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> >> Red Hat EMEA
> >> sbona...@redhat.com
> >> Red Hat respects your work life balance. Therefore there is no need to
> >> answer this email out of your office hours.
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7646LEQIHL76HIJTAZWCXWAHT3M6V47C/
>
>
>
> --
>
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com
> Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AF5TNUS3B4IY666QLXCS6242OLKU/


[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Alex Morrison
Hello All,

I believe the network is performing as expected; I ran an iperf test:

[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# iperf3 -c 10.10.1.2
Connecting to host 10.10.1.2, port 5201
[  5] local 10.10.1.1 port 38422 connected to 10.10.1.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.08 GBytes  9.24 Gbits/sec    0   2.96 MBytes
[  5]   1.00-2.00   sec  1.03 GBytes  8.81 Gbits/sec    0   2.96 MBytes
[  5]   2.00-3.00   sec  1006 MBytes  8.44 Gbits/sec  101   1.45 MBytes
[  5]   3.00-4.00   sec  1.04 GBytes  8.92 Gbits/sec    5    901 KBytes
[  5]   4.00-5.00   sec  1.05 GBytes  9.01 Gbits/sec    0    957 KBytes
[  5]   5.00-6.00   sec  1.08 GBytes  9.23 Gbits/sec    0    990 KBytes
[  5]   6.00-7.00   sec  1008 MBytes  8.46 Gbits/sec  159    655 KBytes
[  5]   7.00-8.00   sec  1.06 GBytes  9.11 Gbits/sec    0    970 KBytes
[  5]   8.00-9.00   sec  1.03 GBytes  8.85 Gbits/sec    2    829 KBytes
[  5]   9.00-10.00  sec  1.04 GBytes  8.96 Gbits/sec    0    947 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.4 GBytes  8.90 Gbits/sec  267             sender
[  5]   0.00-10.04  sec  10.4 GBytes  8.87 Gbits/sec                  receiver

iperf Done.

On Wed, Feb 23, 2022 at 11:45 AM Sunil Kumar Heggodu Gopala Acharya <
shegg...@redhat.com> wrote:

>
> Regards,
> Sunil
>
>
> On Wed, Feb 23, 2022 at 7:34 PM Derek Atkins  wrote:
>
>> Have you verified that you're actually getting 10Gbps between the hosts?
>>
>> -derek
>>
>> On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
>> > Hello Derek,
>> >
>> > We have a 10Gig connection dedicated to the storage network, nothing
>> else
>> > is on that switch.
>> >
>> > On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:
>> >
>> >> Hi,
>> >>
>> >> Another question which I don't see answered:   What is the underlying
>> >> connectivity between the Gluster hosts?
>> >>
>> >> -derek
>> >>
>> >> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
>> >> > Hello Sunil,
>> >> >
>> >> > [root@ovirt1 ~]# gluster --version
>> >> > glusterfs 8.6
>> >> >
>> >> > same on all hosts
>>
> Latest Release-10.1(
> https://lists.gluster.org/pipermail/gluster-users/2022-February/039761.html)
> has some performance fixes which should help in this situation compared to
> the older gluster bits.
>
>> >> >
>> >> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
>> >> > shegg...@redhat.com> wrote:
>> >> >
>> >> >> Hi,
>> >> >>
>> >> >> Which version of gluster is in use?
>> >> >>
>> >> >> Regards,
>> >> >>
>> >> >> Sunil kumar Acharya
>> >> >>
>> >> >> Red Hat
>> >> >>
>> >> >> 
>> >> >>
>> >> >> T: +91-8067935170
>> >> >> 
>> >> >>
>> >> >> 
>> >> >> TRIED. TESTED. TRUSTED. 
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison
>> >> 
>> >> >> wrote:
>> >> >>
>> >> >>> Hello All,
>> >> >>>
>> >> >>> We have 3 servers with a raid 50 array each, we are having extreme
>> >> >>> performance issues with our gluster, writes on gluster seem to take
>> >> at
>> >> >>> least 3 times longer than on the raid directly. Can this be
>> >> improved?
>> >> >>> I've
>> >> >>> read through several other performance issues threads but have been
>> >> >>> unable
>> >> >>> to make any improvements
>> >> >>>
>> >> >>> "gluster volume info" and "gluster volume profile vmstore info" is
>> >> >>> below
>> >> >>>
>> >> >>>
>> >> >>>
>> >>
>> =
>> >> >>>
>> >> >>> -Inside Gluster - test took 35+ hours:
>> >> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d .
>> -s
>> >> >>> 600G -n 0 -m TEST -f -b -u root
>> >> >>> Using uid:0, gid:0.
>> >> >>> Writing intelligently...done
>> >> >>> Rewriting...done
>> >> >>> Reading intelligently...done
>> >> >>> start 'em...done...done...done...done...done...
>> >> >>> Version  1.98   --Sequential Output-- --Sequential
>> >> Input-
>> >> >>> --Random-
>> >> >>> -Per Chr- --Block-- -Rewrite- -Per Chr-
>> >> --Block--
>> >> >>> --Seeks--
>> >> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
>> >> %CP
>> >> >>>  /sec %CP
>> >> >>> TEST   600G   35.7m  17 5824k   7112m
>> >> 13
>> >> >>> 182.7   6
>> >> >>> Latency5466ms   12754ms  3499ms
>> >> >>>  1589ms
>> >> >>>
>> >> >>>
>> >> >>>
>> >>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>> >> >>>
>> >> >>>
>> >> >>>
>> >>
>> =
>> >> >>>
>> >> >>> -Outside Glus

[ovirt-users] Re: [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-02-23 Thread Muli Ben-Yehuda
Thanks for the detailed instructions, Nir. I'm going to scrounge up some
hardware.
By the way, if anyone else would like to work on NVMe/TCP support: for an
NVMe/TCP target you can either use Lightbits (talk to me offline for
details) or use the upstream Linux NVMe/TCP target. Lightbits is a
clustered storage system while upstream is a single target, but the client
side should be close enough for vdsm/ovirt purposes.
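For the upstream route, a rough configfs sketch of a single TCP target (the
backing device, subsystem name, address and port are assumptions; nvmetcli can
do the same thing declaratively):

modprobe nvmet
modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir subsystems/testnqn
echo 1 > subsystems/testnqn/attr_allow_any_host
mkdir subsystems/testnqn/namespaces/1
echo -n /dev/vdb > subsystems/testnqn/namespaces/1/device_path
echo 1 > subsystems/testnqn/namespaces/1/enable
mkdir ports/1
echo tcp > ports/1/addr_trtype
echo ipv4 > ports/1/addr_adrfam
echo 192.168.122.50 > ports/1/addr_traddr
echo 4420 > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/testnqn ports/1/subsystems/testnqn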

Cheers,
Muli
--
Muli Ben-Yehuda
Co-Founder and Chief Scientist @ http://www.lightbitslabs.com
LightOS: The Special Storage Sauce For Your Cloud


On Wed, Feb 23, 2022 at 4:55 PM Nir Soffer  wrote:

> On Wed, Feb 23, 2022 at 4:20 PM Muli Ben-Yehuda 
> wrote:
> >
> > Thanks, Nir and Benny (nice to run into you again, Nir!). I'm a neophyte
> in ovirt and vdsm... What's the simplest way to set up a development
> environment? Is it possible to set up a "standalone" vdsm environment to
> hack support for nvme/tcp or do I need "full ovirt" to make it work?
>
> It should be possible to install vdsm on a single host or vm, and use vdsm
> API to bring the host to the right state, and then attach devices and run
> vms. But I don't know anyone that can pull this out since simulating what
> engine is doing is hard.
>
> So the best way is to set up at least one host and engine host using the
> latest 4.5 rpms, and continue from there. Once you have a host, building
> vdsm on the host and upgrading the rpms is pretty easy.
>
> My preferred setup is to create vms using virt-manager for hosts, engine
> and storage and run all the vms on my laptop.
>
> Note that you must have some traditional storage (NFS/iSCSI) to bring up
> the system even if you plan to use only managed block storage (MBS).
> Unfortunately when we added MBS support we did not have time to fix the huge
> technical debt so you still need a master storage domain using one of the
> traditional legacy options.
>
> To build a setup, you can use:
>
> - engine vm: 6g ram, 2 cpus, centos stream 8
> - hosts vm: 4g ram, 2 cpus, centos stream 8
>   you can start with one host and add more hosts later if you want to
> test migration.
> - storage vm: 2g ram, 2 cpus, any os you like, I use alpine since it
> takes very little
>   memory and its NFS server is fast.
>
> See vdsm README for instructions how to setup a host:
> https://github.com/oVirt/vdsm#manual-installation
>
> For engine host you can follow:
>
> https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/#Enabling_the_Red_Hat_Virtualization_Manager_Repositories_install_RHVM
>
> And after that this should work:
>
> dnf install ovirt-engine
> engine-setup
>
> Accepting all the defaults should work.
>
> When you have engine running, you can add a new host with
> the ip address or dns name of you host(s) vm, and engine will
> do everything for you. Note that you must install the ovirt-release-master
> rpm on the host before you add it to engine.
>
> Nir
>
> >
> > Cheers,
> > Muli
> > --
> > Muli Ben-Yehuda
> > Co-Founder and Chief Scientist @ http://www.lightbitslabs.com
> > LightOS: The Special Storage Sauce For Your Cloud
> >
> >
> > On Wed, Feb 23, 2022 at 4:16 PM Nir Soffer  wrote:
> >>
> >> On Wed, Feb 23, 2022 at 2:48 PM Benny Zlotnik 
> wrote:
> >> >
> >> > So I started looking in the logs and tried to follow along with the
> >> > code, but things didn't make sense and then I saw it's ovirt 4.3 which
> >> > makes things more complicated :)
> >> > Unfortunately because GUID is sent in the metadata the volume is
> >> > treated as a vdsm managed volume[2] for the udev rule generation and
> >> > it prepends the /dev/mapper prefix to an empty string as a result.
> >> > I don't have the vdsm logs, so I am not sure where exactly this fails,
> >> > but if it's after [4] it may be possible to workaround it with a vdsm
> >> > hook
> >> >
> >> > In 4.4.6 we moved the udev rule triggering the volume mapping phase,
> >> > before starting the VM. But it could still not work because we check
> >> > the driver_volume_type in[1], and I saw it's "driver_volume_type":
> >> > "lightos" for lightbits
> >> > In theory it looks like it wouldn't take much to add support for your
> >> > driver in a future release (as it's pretty late for 4.5)
> >>
> >> Adding support for nvme/tcp in 4.3 is probably not feasible, but we will
> >> be happy to accept patches for 4.5.
> >>
> >> To debug such issues vdsm log is the best place to check. We should see
> >> the connection info passed to vdsm, and we have pretty simple code using
> >> it with os_brick to attach the device to the system and setting up the
> udev
> >> rule (which may need some tweaks).
> >>
> >> Nir
> >>
> >> > [1]
> https://github.com/oVirt/vdsm/blob/500c035903dd35180d71c97791e0ce4356fb77ad/lib/vdsm/storage/managedvolume.py#L110
> >> >
> >> > (4.3)
> >> > [2]
> https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/clientIF.py#L451
> >> > [3]
> https://github.com/oVirt/vdsm/blob/b42d4a816b53

[ovirt-users] Re: [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-02-23 Thread Nir Soffer
On Wed, Feb 23, 2022 at 4:20 PM Muli Ben-Yehuda  wrote:
>
> Thanks, Nir and Benny (nice to run into you again, Nir!). I'm a neophyte in 
> ovirt and vdsm... What's the simplest way to set up a development 
> environment? Is it possible to set up a "standalone" vdsm environment to hack 
> support for nvme/tcp or do I need "full ovirt" to make it work?

It should be possible to install vdsm on a single host or vm, and use vdsm
API to bring the host to the right state, and then attach devices and run
vms. But I don't know anyone that can pull this off, since simulating what
engine is doing is hard.

So the best way is to set up at least one host and engine host using the
latest 4.5 rpms, and continue from there. Once you have a host, building
vdsm on the host and upgrading the rpms is pretty easy.

My preferred setup is to create vms using virt-manager for hosts, engine
and storage and run all the vms on my laptop.
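If you prefer the CLI over virt-manager, a rough equivalent with virt-install
(the ISO path and disk size are assumptions; RAM/CPU sizes follow the list
below):

virt-install --name engine --memory 6144 --vcpus 2 \
    --disk size=50 --os-variant centos-stream8 \
    --network network=default \
    --cdrom /var/lib/libvirt/images/CentOS-Stream-8-latest.iso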

Note that you must have some traditional storage (NFS/iSCSI) to bring up
the system even if you plan to use only managed block storage (MBS).
Unfortunately when we added MBS support we did not have time to fix the huge
technical debt so you still need a master storage domain using one of the
traditional legacy options.

To build a setup, you can use:

- engine vm: 6g ram, 2 cpus, centos stream 8
- hosts vm: 4g ram, 2 cpus, centos stream 8
  you can start with one host and add more hosts later if you want to
test migration.
- storage vm: 2g ram, 2 cpus, any os you like, I use alpine since it
takes very little
  memory and its NFS server is fast.
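For the storage vm, a minimal NFS export sketch (paths are assumptions; oVirt
expects the export to be owned by vdsm:kvm, i.e. 36:36):

mkdir -p /exports/data
chown 36:36 /exports/data
echo '/exports/data *(rw,sync,no_root_squash,anonuid=36,anongid=36)' >> /etc/exports
exportfs -ra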

See vdsm README for instructions how to setup a host:
https://github.com/oVirt/vdsm#manual-installation

For engine host you can follow:
https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/#Enabling_the_Red_Hat_Virtualization_Manager_Repositories_install_RHVM

And after that this should work:

dnf install ovirt-engine
engine-setup

Accepting all the defaults should work.

When you have engine running, you can add a new host with
the ip address or dns name of your host(s) vm, and engine will
do everything for you. Note that you must install the ovirt-release-master
rpm on the host before you add it to engine.
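For example (the URL is from memory, so treat it as an assumption and check
resources.ovirt.org if it has moved):

dnf install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm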

Nir

>
> Cheers,
> Muli
> --
> Muli Ben-Yehuda
> Co-Founder and Chief Scientist @ http://www.lightbitslabs.com
> LightOS: The Special Storage Sauce For Your Cloud
>
>
> On Wed, Feb 23, 2022 at 4:16 PM Nir Soffer  wrote:
>>
>> On Wed, Feb 23, 2022 at 2:48 PM Benny Zlotnik  wrote:
>> >
>> > So I started looking in the logs and tried to follow along with the
>> > code, but things didn't make sense and then I saw it's ovirt 4.3 which
>> > makes things more complicated :)
>> > Unfortunately because GUID is sent in the metadata the volume is
>> > treated as a vdsm managed volume[2] for the udev rule generation and
>> > it prepends the /dev/mapper prefix to an empty string as a result.
>> > I don't have the vdsm logs, so I am not sure where exactly this fails,
>> > but if it's after [4] it may be possible to workaround it with a vdsm
>> > hook
>> >
>> > In 4.4.6 we moved the udev rule triggering the volume mapping phase,
>> > before starting the VM. But it could still not work because we check
>> > the driver_volume_type in[1], and I saw it's "driver_volume_type":
>> > "lightos" for lightbits
>> > In theory it looks like it wouldn't take much to add support for your
>> > driver in a future release (as it's pretty late for 4.5)
>>
>> Adding support for nvme/tcp in 4.3 is probably not feasible, but we will
>> be happy to accept patches for 4.5.
>>
>> To debug such issues vdsm log is the best place to check. We should see
>> the connection info passed to vdsm, and we have pretty simple code using
>> it with os_brick to attach the device to the system and setting up the udev
>> rule (which may need some tweaks).
>>
>> Nir
>>
>> > [1] 
>> > https://github.com/oVirt/vdsm/blob/500c035903dd35180d71c97791e0ce4356fb77ad/lib/vdsm/storage/managedvolume.py#L110
>> >
>> > (4.3)
>> > [2] 
>> > https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/clientIF.py#L451
>> > [3] 
>> > https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/storage/hsm.py#L3141
>> > [4] 
>> > https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/virt/vm.py#L3835
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Feb 23, 2022 at 12:44 PM Muli Ben-Yehuda  
>> > wrote:
>> > >
>> > > Certainly, thanks for your help!
>> > > I put cinderlib and engine.log here: 
>> > > http://www.mulix.org/misc/ovirt-logs-20220223123641.tar.gz
>> > > If you grep for 'mulivm1' you will see for example:
>> > >
>> > > 2022-02-22 04:31:04,473-05 ERROR 
>> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
>> > > (default task-10) [36d8a122] Command 'HotPlugDiskVDSCommand(HostName = 
>> > > client1, 
>> > > HotPlugDiskVDSParameters:{hostId='fc5c2860-36b1-4213-843f-10ca7b

[ovirt-users] Re: [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-02-23 Thread Muli Ben-Yehuda
Thanks, Nir and Benny (nice to run into you again, Nir!). I'm a neophyte in
ovirt and vdsm... What's the simplest way to set up a development
environment? Is it possible to set up a "standalone" vdsm environment to
hack support for nvme/tcp or do I need "full ovirt" to make it work?

Cheers,
Muli
--
Muli Ben-Yehuda
Co-Founder and Chief Scientist @ http://www.lightbitslabs.com
LightOS: The Special Storage Sauce For Your Cloud


On Wed, Feb 23, 2022 at 4:16 PM Nir Soffer  wrote:

> On Wed, Feb 23, 2022 at 2:48 PM Benny Zlotnik  wrote:
> >
> > So I started looking in the logs and tried to follow along with the
> > code, but things didn't make sense and then I saw it's ovirt 4.3 which
> > makes things more complicated :)
> > Unfortunately because GUID is sent in the metadata the volume is
> > treated as a vdsm managed volume[2] for the udev rule generation and
> > it prepends the /dev/mapper prefix to an empty string as a result.
> > I don't have the vdsm logs, so I am not sure where exactly this fails,
> > but if it's after [4] it may be possible to workaround it with a vdsm
> > hook
> >
> > In 4.4.6 we moved the udev rule triggering the volume mapping phase,
> > before starting the VM. But it could still not work because we check
> > the driver_volume_type in[1], and I saw it's "driver_volume_type":
> > "lightos" for lightbits
> > In theory it looks like it wouldn't take much to add support for your
> > driver in a future release (as it's pretty late for 4.5)
>
> Adding support for nvme/tcp in 4.3 is probably not feasible, but we will
> be happy to accept patches for 4.5.
>
> To debug such issues vdsm log is the best place to check. We should see
> the connection info passed to vdsm, and we have pretty simple code using
> it with os_brick to attach the device to the system and setting up the udev
> rule (which may need some tweaks).
>
> Nir
>
> > [1]
> https://github.com/oVirt/vdsm/blob/500c035903dd35180d71c97791e0ce4356fb77ad/lib/vdsm/storage/managedvolume.py#L110
> >
> > (4.3)
> > [2]
> https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/clientIF.py#L451
> > [3]
> https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/storage/hsm.py#L3141
> > [4]
> https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/virt/vm.py#L3835
> >
> >
> >
> >
> >
> >
> >
> >
> > On Wed, Feb 23, 2022 at 12:44 PM Muli Ben-Yehuda 
> wrote:
> > >
> > > Certainly, thanks for your help!
> > > I put cinderlib and engine.log here:
> http://www.mulix.org/misc/ovirt-logs-20220223123641.tar.gz
> > > If you grep for 'mulivm1' you will see for example:
> > >
> > > 2022-02-22 04:31:04,473-05 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default
> task-10) [36d8a122] Command 'HotPlugDiskVDSCommand(HostName = client1,
> HotPlugDiskVDSParameters:{hostId='fc5c2860-36b1-4213-843f-10ca7b35556c',
> vmId='e13f73a0-8e20-4ec3-837f-aeacc082c7aa',
> diskId='d1e1286b-38cc-4d56-9d4e-f331ffbe830f', addressMap='[bus=0,
> controller=0, unit=2, type=drive, target=0]'})' execution failed:
> VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error =
> Failed to bind /dev/mapper/ on to /var/run/libvirt/qemu/21-mulivm1.mapper.:
> Not a directory, code = 45
> > >
> > > Please let me know what other information will be useful and I will
> prove.
> > >
> > > Cheers,
> > > Muli
> > >
> > > On Wed, Feb 23, 2022 at 11:14 AM Benny Zlotnik 
> wrote:
> > >>
> > >> Hi,
> > >>
> > >> We haven't tested this, and we do not have any code to handle nvme/tcp
> > >> drivers, only iscsi and rbd. Given the path seen in the logs
> > >> '/dev/mapper', it looks like it might require code changes to support
> > >> this.
> > >> Can you share cinderlib[1] and engine logs to see what is returned by
> > >> the driver? I may be able to estimate what would be required (it's
> > >> possible that it would be enough to just change the handling of the
> > >> path in the engine)
> > >>
> > >> [1] /var/log/ovirt-engine/cinderlib/cinderlib//log
> > >>
> > >> On Wed, Feb 23, 2022 at 10:54 AM  wrote:
> > >> >
> > >> > Hi everyone,
> > >> >
> > >> > We are trying to set up ovirt (4.3.10 at the moment, customer
> preference) to use Lightbits (https://www.lightbitslabs.com) storage via
> our openstack cinder driver with cinderlib. The cinderlib and cinder driver
> bits are working fine but when ovirt tries to attach the device to a VM we
> get the following error:
> > >> >
> > >> > libvirt:  error : cannot create file '/var/run/libvirt/qemu/
> 18-mulivm1.dev/mapper/': Is a directory
> > >> >
> > >> > We get the same error regardless of whether I try to run the VM or
> try to attach the device while it is running. The error appears to come
> from vdsm which passes /dev/mapper as the prefered device?
> > >> >
> > >> > 2022-02-22 09:50:11,848-0500 INFO  (vm/3ae7dcf4) [vdsm.api] FINISH
> appropriateDevice return={'path': '/dev/mapper/', 'truesize':
> '53687091200', 'apparents

[ovirt-users] Re: [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-02-23 Thread Nir Soffer
On Wed, Feb 23, 2022 at 2:48 PM Benny Zlotnik  wrote:
>
> So I started looking in the logs and tried to follow along with the
> code, but things didn't make sense and then I saw it's ovirt 4.3 which
> makes things more complicated :)
> Unfortunately because GUID is sent in the metadata the volume is
> treated as a vdsm managed volume[2] for the udev rule generation and
> it prepends the /dev/mapper prefix to an empty string as a result.
> I don't have the vdsm logs, so I am not sure where exactly this fails,
> but if it's after [4] it may be possible to workaround it with a vdsm
> hook
>
> In 4.4.6 we moved the udev rule triggering the volume mapping phase,
> before starting the VM. But it could still not work because we check
> the driver_volume_type in[1], and I saw it's "driver_volume_type":
> "lightos" for lightbits
> In theory it looks like it wouldn't take much to add support for your
> driver in a future release (as it's pretty late for 4.5)

Adding support for nvme/tcp in 4.3 is probably not feasible, but we will
be happy to accept patches for 4.5.

To debug such issues vdsm log is the best place to check. We should see
the connection info passed to vdsm, and we have pretty simple code using
it with os_brick to attach the device to the system and setting up the udev
rule (which may need some tweaks).
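For example, on the host (standard vdsm log location; the grep patterns are
just a starting point, and appropriateDevice is the call visible in the vdsm
log snippet Muli quoted):

grep -iE 'managedvolume|appropriateDevice' /var/log/vdsm/vdsm.log | less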

Nir

> [1] 
> https://github.com/oVirt/vdsm/blob/500c035903dd35180d71c97791e0ce4356fb77ad/lib/vdsm/storage/managedvolume.py#L110
>
> (4.3)
> [2] 
> https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/clientIF.py#L451
> [3] 
> https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/storage/hsm.py#L3141
> [4] 
> https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/virt/vm.py#L3835
>
>
>
>
>
>
>
>
> On Wed, Feb 23, 2022 at 12:44 PM Muli Ben-Yehuda  
> wrote:
> >
> > Certainly, thanks for your help!
> > I put cinderlib and engine.log here: 
> > http://www.mulix.org/misc/ovirt-logs-20220223123641.tar.gz
> > If you grep for 'mulivm1' you will see for example:
> >
> > 2022-02-22 04:31:04,473-05 ERROR 
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default 
> > task-10) [36d8a122] Command 'HotPlugDiskVDSCommand(HostName = client1, 
> > HotPlugDiskVDSParameters:{hostId='fc5c2860-36b1-4213-843f-10ca7b35556c', 
> > vmId='e13f73a0-8e20-4ec3-837f-aeacc082c7aa', 
> > diskId='d1e1286b-38cc-4d56-9d4e-f331ffbe830f', addressMap='[bus=0, 
> > controller=0, unit=2, type=drive, target=0]'})' execution failed: 
> > VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = 
> > Failed to bind /dev/mapper/ on to /var/run/libvirt/qemu/21-mulivm1.mapper.: 
> > Not a directory, code = 45
> >
> > Please let me know what other information will be useful and I will prove.
> >
> > Cheers,
> > Muli
> >
> > On Wed, Feb 23, 2022 at 11:14 AM Benny Zlotnik  wrote:
> >>
> >> Hi,
> >>
> >> We haven't tested this, and we do not have any code to handle nvme/tcp
> >> drivers, only iscsi and rbd. Given the path seen in the logs
> >> '/dev/mapper', it looks like it might require code changes to support
> >> this.
> >> Can you share cinderlib[1] and engine logs to see what is returned by
> >> the driver? I may be able to estimate what would be required (it's
> >> possible that it would be enough to just change the handling of the
> >> path in the engine)
> >>
> >> [1] /var/log/ovirt-engine/cinderlib/cinderlib//log
> >>
> >> On Wed, Feb 23, 2022 at 10:54 AM  wrote:
> >> >
> >> > Hi everyone,
> >> >
> >> > We are trying to set up ovirt (4.3.10 at the moment, customer 
> >> > preference) to use Lightbits (https://www.lightbitslabs.com) storage via 
> >> > our openstack cinder driver with cinderlib. The cinderlib and cinder 
> >> > driver bits are working fine but when ovirt tries to attach the device 
> >> > to a VM we get the following error:
> >> >
> >> > libvirt:  error : cannot create file 
> >> > '/var/run/libvirt/qemu/18-mulivm1.dev/mapper/': Is a directory
> >> >
> >> > We get the same error regardless of whether I try to run the VM or try 
> >> > to attach the device while it is running. The error appears to come from 
> >> > vdsm which passes /dev/mapper as the prefered device?
> >> >
> >> > 2022-02-22 09:50:11,848-0500 INFO  (vm/3ae7dcf4) [vdsm.api] FINISH 
> >> > appropriateDevice return={'path': '/dev/mapper/', 'truesize': 
> >> > '53687091200', 'apparentsize': '53687091200'} from=internal, 
> >> > task_id=77f40c4e-733d-4d82-b418-aaeb6b912d39 (api:54)
> >> > 2022-02-22 09:50:11,849-0500 INFO  (vm/3ae7dcf4) [vds] prepared volume 
> >> > path: /dev/mapper/ (clientIF:510)
> >> >
> >> > Suggestions for how to debug this further? Is this a known issue? Did 
> >> > anyone get nvme/tcp storage working with ovirt and/or vdsm?
> >> >
> >> > Thanks,
> >> > Muli
> >> >
> >> > ___
> >> > Users mailing list -- users@ovirt.org
> >> > To unsubscribe send an email to users-le.

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Derek Atkins
Have you verified that you're actually getting 10Gbps between the hosts?

-derek

On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
> Hello Derek,
>
> We have a 10Gig connection dedicated to the storage network, nothing else
> is on that switch.
>
> On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:
>
>> Hi,
>>
>> Another question which I don't see answered:   What is the underlying
>> connectivity between the Gluster hosts?
>>
>> -derek
>>
>> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
>> > Hello Sunil,
>> >
>> > [root@ovirt1 ~]# gluster --version
>> > glusterfs 8.6
>> >
>> > same on all hosts
>> >
>> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
>> > shegg...@redhat.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> Which version of gluster is in use?
>> >>
>> >> Regards,
>> >>
>> >> Sunil kumar Acharya
>> >>
>> >> Red Hat
>> >>
>> >> 
>> >>
>> >> T: +91-8067935170
>> >> 
>> >>
>> >> 
>> >> TRIED. TESTED. TRUSTED. 
>> >>
>> >>
>> >>
>> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison
>> 
>> >> wrote:
>> >>
>> >>> Hello All,
>> >>>
>> >>> We have 3 servers with a raid 50 array each, we are having extreme
>> >>> performance issues with our gluster, writes on gluster seem to take
>> at
>> >>> least 3 times longer than on the raid directly. Can this be
>> improved?
>> >>> I've
>> >>> read through several other performance issues threads but have been
>> >>> unable
>> >>> to make any improvements
>> >>>
>> >>> "gluster volume info" and "gluster volume profile vmstore info" is
>> >>> below
>> >>>
>> >>>
>> >>>
>> =
>> >>>
>> >>> -Inside Gluster - test took 35+ hours:
>> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> >>> 600G -n 0 -m TEST -f -b -u root
>> >>> Using uid:0, gid:0.
>> >>> Writing intelligently...done
>> >>> Rewriting...done
>> >>> Reading intelligently...done
>> >>> start 'em...done...done...done...done...done...
>> >>> Version  1.98   --Sequential Output-- --Sequential
>> Input-
>> >>> --Random-
>> >>> -Per Chr- --Block-- -Rewrite- -Per Chr-
>> --Block--
>> >>> --Seeks--
>> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
>> %CP
>> >>>  /sec %CP
>> >>> TEST   600G   35.7m  17 5824k   7112m
>> 13
>> >>> 182.7   6
>> >>> Latency5466ms   12754ms  3499ms
>> >>>  1589ms
>> >>>
>> >>>
>> >>>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>> >>>
>> >>>
>> >>>
>> =
>> >>>
>> >>> -Outside Gluster - test took 18 minutes:
>> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> >>> 600G -n 0 -m TEST -f -b -u root
>> >>> Using uid:0, gid:0.
>> >>> Writing intelligently...done
>> >>> Rewriting...done
>> >>> Reading intelligently...done
>> >>> start 'em...done...done...done...done...done...
>> >>> Version  1.98   --Sequential Output-- --Sequential
>> Input-
>> >>> --Random-
>> >>> -Per Chr- --Block-- -Rewrite- -Per Chr-
>> --Block--
>> >>> --Seeks--
>> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
>> %CP
>> >>>  /sec %CP
>> >>> TEST   600G567m  78  149m  30307m
>> 37
>> >>>  83.0  57
>> >>> Latency 205ms4630ms  1450ms
>> >>> 679ms
>> >>>
>> >>>
>> >>>
>> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
>> >>>
>> >>>
>> >>>
>> =
>> >>>
>> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume
>> info
>> >>> Volume Name: engine
>> >>> Type: Replicate
>> >>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
>> >>> Status: Started
>> >>> Snapshot Count: 0
>> >>> Number of Bricks: 1 x 3 = 3
>> >>> Transport-type: tcp
>> >>> Bricks:
>> >>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
>> >>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
>> >>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
>> >>> Options Reconfigured:
>> >>> cluster.granular-entry-heal: enable
>> >>> performance.strict-o-direct: on
>> >>> network.ping-timeout: 30
>> >>> storage.owner-gid: 36
>> >>> storage.owner-uid: 36
>> >>> server.event-threads: 4
>> >>> client.event-threads: 4
>> >>> cluster.choose-local: off
>> >>> user.cifs: off
>> >>> features.shard: on
>> >>> cluster.shd-wait-qlength: 1
>> >>> cluster.shd-max-threads: 8

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Alex Morrison
Hello Derek,

We have a 10Gig connection dedicated to the storage network, nothing else
is on that switch.

On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:

> Hi,
>
> Another question which I don't see answered:   What is the underlying
> connectivity between the Gluster hosts?
>
> -derek
>
> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
> > Hello Sunil,
> >
> > [root@ovirt1 ~]# gluster --version
> > glusterfs 8.6
> >
> > same on all hosts
> >
> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
> > shegg...@redhat.com> wrote:
> >
> >> Hi,
> >>
> >> Which version of gluster is in use?
> >>
> >> Regards,
> >>
> >> Sunil kumar Acharya
> >>
> >> Red Hat
> >>
> >> 
> >>
> >> T: +91-8067935170
> >> 
> >>
> >> 
> >> TRIED. TESTED. TRUSTED. 
> >>
> >>
> >>
> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison 
> >> wrote:
> >>
> >>> Hello All,
> >>>
> >>> We have 3 servers with a raid 50 array each, we are having extreme
> >>> performance issues with our gluster, writes on gluster seem to take at
> >>> least 3 times longer than on the raid directly. Can this be improved?
> >>> I've
> >>> read through several other performance issues threads but have been
> >>> unable
> >>> to make any improvements
> >>>
> >>> "gluster volume info" and "gluster volume profile vmstore info" is
> >>> below
> >>>
> >>>
> >>>
> =
> >>>
> >>> -Inside Gluster - test took 35+ hours:
> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
> >>> 600G -n 0 -m TEST -f -b -u root
> >>> Using uid:0, gid:0.
> >>> Writing intelligently...done
> >>> Rewriting...done
> >>> Reading intelligently...done
> >>> start 'em...done...done...done...done...done...
> >>> Version  1.98   --Sequential Output-- --Sequential Input-
> >>> --Random-
> >>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
> >>> --Seeks--
> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> >>>  /sec %CP
> >>> TEST   600G   35.7m  17 5824k   7112m  13
> >>> 182.7   6
> >>> Latency5466ms   12754ms  3499ms
> >>>  1589ms
> >>>
> >>>
> >>>
> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
> >>>
> >>>
> >>>
> =
> >>>
> >>> -Outside Gluster - test took 18 minutes:
> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
> >>> 600G -n 0 -m TEST -f -b -u root
> >>> Using uid:0, gid:0.
> >>> Writing intelligently...done
> >>> Rewriting...done
> >>> Reading intelligently...done
> >>> start 'em...done...done...done...done...done...
> >>> Version  1.98   --Sequential Output-- --Sequential Input-
> >>> --Random-
> >>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
> >>> --Seeks--
> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> >>>  /sec %CP
> >>> TEST   600G567m  78  149m  30307m  37
> >>>  83.0  57
> >>> Latency 205ms4630ms  1450ms
> >>> 679ms
> >>>
> >>>
> >>>
> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
> >>>
> >>>
> >>>
> =
> >>>
> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume
> info
> >>> Volume Name: engine
> >>> Type: Replicate
> >>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
> >>> Status: Started
> >>> Snapshot Count: 0
> >>> Number of Bricks: 1 x 3 = 3
> >>> Transport-type: tcp
> >>> Bricks:
> >>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
> >>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
> >>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
> >>> Options Reconfigured:
> >>> cluster.granular-entry-heal: enable
> >>> performance.strict-o-direct: on
> >>> network.ping-timeout: 30
> >>> storage.owner-gid: 36
> >>> storage.owner-uid: 36
> >>> server.event-threads: 4
> >>> client.event-threads: 4
> >>> cluster.choose-local: off
> >>> user.cifs: off
> >>> features.shard: on
> >>> cluster.shd-wait-qlength: 1
> >>> cluster.shd-max-threads: 8
> >>> cluster.locking-scheme: granular
> >>> cluster.data-self-heal-algorithm: full
> >>> cluster.server-quorum-type: server
> >>> cluster.quorum-type: auto
> >>> cluster.eager-lock: enable
> >>> network.remote-dio: off
> >>> performance.low-prio-threads: 32
> >>> performance.io-cache: off
> >>> performance.read-ahea

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Derek Atkins
Hi,

Another question which I don't see answered:   What is the underlying
connectivity between the Gluster hosts?

-derek

On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
> Hello Sunil,
>
> [root@ovirt1 ~]# gluster --version
> glusterfs 8.6
>
> same on all hosts
>
> On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
> shegg...@redhat.com> wrote:
>
>> Hi,
>>
>> Which version of gluster is in use?
>>
>> Regards,
>>
>> Sunil kumar Acharya
>>
>> Red Hat
>>
>> 
>>
>> T: +91-8067935170
>> 
>>
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>>
>> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison 
>> wrote:
>>
>>> Hello All,
>>>
>>> We have 3 servers with a raid 50 array each, we are having extreme
>>> performance issues with our gluster, writes on gluster seem to take at
>>> least 3 times longer than on the raid directly. Can this be improved?
>>> I've
>>> read through several other performance issues threads but have been
>>> unable
>>> to make any improvements
>>>
>>> "gluster volume info" and "gluster volume profile vmstore info" is
>>> below
>>>
>>>
>>> =
>>>
>>> -Inside Gluster - test took 35+ hours:
>>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>>> 600G -n 0 -m TEST -f -b -u root
>>> Using uid:0, gid:0.
>>> Writing intelligently...done
>>> Rewriting...done
>>> Reading intelligently...done
>>> start 'em...done...done...done...done...done...
>>> Version  1.98   --Sequential Output-- --Sequential Input-
>>> --Random-
>>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
>>> --Seeks--
>>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>>  /sec %CP
>>> TEST   600G   35.7m  17 5824k   7112m  13
>>> 182.7   6
>>> Latency5466ms   12754ms  3499ms
>>>  1589ms
>>>
>>>
>>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>>>
>>>
>>> =
>>>
>>> -Outside Gluster - test took 18 minutes:
>>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>>> 600G -n 0 -m TEST -f -b -u root
>>> Using uid:0, gid:0.
>>> Writing intelligently...done
>>> Rewriting...done
>>> Reading intelligently...done
>>> start 'em...done...done...done...done...done...
>>> Version  1.98   --Sequential Output-- --Sequential Input-
>>> --Random-
>>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
>>> --Seeks--
>>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>>  /sec %CP
>>> TEST   600G567m  78  149m  30307m  37
>>>  83.0  57
>>> Latency 205ms4630ms  1450ms
>>> 679ms
>>>
>>>
>>> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
>>>
>>>
>>> =
>>>
>>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume info
>>> Volume Name: engine
>>> Type: Replicate
>>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
>>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
>>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
>>> Options Reconfigured:
>>> cluster.granular-entry-heal: enable
>>> performance.strict-o-direct: on
>>> network.ping-timeout: 30
>>> storage.owner-gid: 36
>>> storage.owner-uid: 36
>>> server.event-threads: 4
>>> client.event-threads: 4
>>> cluster.choose-local: off
>>> user.cifs: off
>>> features.shard: on
>>> cluster.shd-wait-qlength: 1
>>> cluster.shd-max-threads: 8
>>> cluster.locking-scheme: granular
>>> cluster.data-self-heal-algorithm: full
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> cluster.eager-lock: enable
>>> network.remote-dio: off
>>> performance.low-prio-threads: 32
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> transport.address-family: inet
>>> storage.fips-mode-rchecksum: on
>>> nfs.disable: on
>>> performance.client-io-threads: on
>>> diagnostics.latency-measurement: on
>>> diagnostics.count-fop-hits: on
>>>
>>> Volume Name: vmstore
>>> Type: Replicate
>>> Volume ID: 2670ff29-8d43-4610-a437-c6ec2c235753
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Alex Morrison
Hello Sunil,

[root@ovirt1 ~]# gluster --version
glusterfs 8.6

same on all hosts

On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
shegg...@redhat.com> wrote:

> Hi,
>
> Which version of gluster is in use?
>
> Regards,
>
> Sunil kumar Acharya
>
> Red Hat
>
> 
>
> T: +91-8067935170 
>
> 
> TRIED. TESTED. TRUSTED. 
>
>
>
> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison 
> wrote:
>
>> Hello All,
>>
>> We have 3 servers with a RAID 50 array each, and we are having extreme
>> performance issues with our Gluster storage: writes on Gluster seem to take
>> at least 3 times longer than on the RAID directly. Can this be improved?
>> I've read through several other performance-issue threads but have been
>> unable to make any improvements.
>>
>> "gluster volume info" and "gluster volume profile vmstore info" is below
>>
>>
>> =
>>
>> -Inside Gluster - test took 35+ hours:
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> 600G -n 0 -m TEST -f -b -u root
>> Using uid:0, gid:0.
>> Writing intelligently...done
>> Rewriting...done
>> Reading intelligently...done
>> start 'em...done...done...done...done...done...
>> Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
>>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>> TEST           600G           35.7m  17 5824k   7           112m   13 182.7   6
>> Latency                       5466ms    12754ms            3499ms     1589ms
>>
>>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>>
>>
>> =
>>
>> -Outside Gluster - test took 18 minutes:
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> 600G -n 0 -m TEST -f -b -u root
>> Using uid:0, gid:0.
>> Writing intelligently...done
>> Rewriting...done
>> Reading intelligently...done
>> start 'em...done...done...done...done...done...
>> Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
>>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>> TEST           600G            567m  78  149m  30           307m   37  83.0  57
>> Latency                        205ms     4630ms            1450ms      679ms
>>
>>
>> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
>>
>>
>> =
>>
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume info
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
>> Options Reconfigured:
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> network.ping-timeout: 30
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> server.event-threads: 4
>> client.event-threads: 4
>> cluster.choose-local: off
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 1
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> nfs.disable: on
>> performance.client-io-threads: on
>> diagnostics.latency-measurement: on
>> diagnostics.count-fop-hits: on
>>
>> Volume Name: vmstore
>> Type: Replicate
>> Volume ID: 2670ff29-8d43-4610-a437-c6ec2c235753
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Brick2: ovirt2-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Brick3: ovirt3-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Options Reconfigured:
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> network.ping-timeout: 20
>> storage.owner-gid: 36
>> stor

[ovirt-users] Re: VMs losing network interfaces

2022-02-23 Thread jb
Yes I know, it was a bad workaround, but somehow Debian had issues with 
auto-mounting CIFS. I have fixed it now by enabling 
systemd-networkd-wait-online, but afterwards I also had to override the nginx 
service so that it waits for network-online.target.
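
For reference, a minimal sketch of that kind of ordering fix, assuming a
systemd-networkd based setup; the drop-in path and unit names are the standard
systemd ones, but the exact override needed may differ per system:

# Make network-online.target meaningful by enabling the wait-online service
systemctl enable --now systemd-networkd-wait-online.service

# Add a drop-in so nginx only starts once the network is actually online
mkdir -p /etc/systemd/system/nginx.service.d
cat > /etc/systemd/system/nginx.service.d/wait-online.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload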


Am 21.02.22 um 18:14 schrieb Strahil Nikolov:

Don't do that.
Use systemd automount or autofs to fix issues with FS.

Best Regards,
Strahil Nikolov
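
As a concrete illustration of the automount suggestion, a hedged sketch of an
/etc/fstab entry using systemd's automount options; the share path, mount
point, and credentials file below are placeholders, not values from this
thread:

# Hypothetical CIFS share mounted on demand via systemd automount
cat >> /etc/fstab <<'EOF'
//fileserver/share  /mnt/share  cifs  credentials=/root/.smbcred,_netdev,x-systemd.automount,x-systemd.idle-timeout=60  0  0
EOF
systemctl daemon-reload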

On Mon, Feb 21, 2022 at 12:48, jb
 wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/KP5XNSTGQU6ZNHGXMQNGCE6AUTMUIILQ/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QB5GLYRBTVJE42O7QM2EARU2TRV52FUJ/


[ovirt-users] Re: [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-02-23 Thread Benny Zlotnik
So I started looking in the logs and tried to follow along with the
code, but things didn't make sense, and then I saw it's oVirt 4.3, which
makes things more complicated :)
Unfortunately, because a GUID is sent in the metadata, the volume is
treated as a vdsm-managed volume [2] for the udev rule generation, and
it prepends the /dev/mapper prefix to an empty string as a result.
I don't have the vdsm logs, so I am not sure where exactly this fails,
but if it's after [4] it may be possible to work around it with a vdsm
hook.

In 4.4.6 we moved the udev rule triggering to the volume mapping phase,
before starting the VM. But it would still not work, because we check
the driver_volume_type in [1], and I saw it's "driver_volume_type":
"lightos" for Lightbits.
In theory it looks like it wouldn't take much to add support for your
driver in a future release (as it's pretty late for 4.5).

[1] 
https://github.com/oVirt/vdsm/blob/500c035903dd35180d71c97791e0ce4356fb77ad/lib/vdsm/storage/managedvolume.py#L110

(4.3)
[2] 
https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/clientIF.py#L451
[3] 
https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/storage/hsm.py#L3141
[4] 
https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be787/lib/vdsm/virt/vm.py#L3835
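
As a rough illustration of the vdsm-hook workaround mentioned above, a minimal
sketch, assuming the standard vdsm hook convention where the libvirt domain XML
path is passed in the _hook_domxml environment variable; the hook file name and
the replacement device path are hypothetical and would have to be adapted:

#!/bin/bash
# Hypothetical hook: /usr/libexec/vdsm/hooks/before_vm_start/50_fix_lightos_path
# Sketch only: assumes the broken disk source is literally
# <source dev='/dev/mapper/'/> in the domain XML. REAL_DEV is a made-up
# placeholder; the actual NVMe device for the volume has to be discovered
# separately (e.g. from the connector information).
REAL_DEV="/dev/nvme0n1"
sed -i "s|<source dev='/dev/mapper/'/>|<source dev='${REAL_DEV}'/>|" "$_hook_domxml"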








On Wed, Feb 23, 2022 at 12:44 PM Muli Ben-Yehuda  wrote:
>
> Certainly, thanks for your help!
> I put cinderlib and engine.log here: 
> http://www.mulix.org/misc/ovirt-logs-20220223123641.tar.gz
> If you grep for 'mulivm1' you will see for example:
>
> 2022-02-22 04:31:04,473-05 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default 
> task-10) [36d8a122] Command 'HotPlugDiskVDSCommand(HostName = client1, 
> HotPlugDiskVDSParameters:{hostId='fc5c2860-36b1-4213-843f-10ca7b35556c', 
> vmId='e13f73a0-8e20-4ec3-837f-aeacc082c7aa', 
> diskId='d1e1286b-38cc-4d56-9d4e-f331ffbe830f', addressMap='[bus=0, 
> controller=0, unit=2, type=drive, target=0]'})' execution failed: 
> VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = 
> Failed to bind /dev/mapper/ on to /var/run/libvirt/qemu/21-mulivm1.mapper.: 
> Not a directory, code = 45
>
> Please let me know what other information will be useful and I will provide.
>
> Cheers,
> Muli
>
> On Wed, Feb 23, 2022 at 11:14 AM Benny Zlotnik  wrote:
>>
>> Hi,
>>
>> We haven't tested this, and we do not have any code to handle nvme/tcp
>> drivers, only iscsi and rbd. Given the path seen in the logs
>> '/dev/mapper', it looks like it might require code changes to support
>> this.
>> Can you share cinderlib[1] and engine logs to see what is returned by
>> the driver? I may be able to estimate what would be required (it's
>> possible that it would be enough to just change the handling of the
>> path in the engine)
>>
>> [1] /var/log/ovirt-engine/cinderlib/cinderlib//log
>>
>> On Wed, Feb 23, 2022 at 10:54 AM  wrote:
>> >
>> > Hi everyone,
>> >
>> > We are trying to set up ovirt (4.3.10 at the moment, customer preference) 
>> > to use Lightbits (https://www.lightbitslabs.com) storage via our openstack 
>> > cinder driver with cinderlib. The cinderlib and cinder driver bits are 
>> > working fine but when ovirt tries to attach the device to a VM we get the 
>> > following error:
>> >
>> > libvirt:  error : cannot create file 
>> > '/var/run/libvirt/qemu/18-mulivm1.dev/mapper/': Is a directory
>> >
>> > We get the same error regardless of whether I try to run the VM or try to 
>> > attach the device while it is running. The error appears to come from vdsm 
>> > which passes /dev/mapper as the preferred device?
>> >
>> > 2022-02-22 09:50:11,848-0500 INFO  (vm/3ae7dcf4) [vdsm.api] FINISH 
>> > appropriateDevice return={'path': '/dev/mapper/', 'truesize': 
>> > '53687091200', 'apparentsize': '53687091200'} from=internal, 
>> > task_id=77f40c4e-733d-4d82-b418-aaeb6b912d39 (api:54)
>> > 2022-02-22 09:50:11,849-0500 INFO  (vm/3ae7dcf4) [vds] prepared volume 
>> > path: /dev/mapper/ (clientIF:510)
>> >
>> > Suggestions for how to debug this further? Is this a known issue? Did 
>> > anyone get nvme/tcp storage working with ovirt and/or vdsm?
>> >
>> > Thanks,
>> > Muli
>> >
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> > oVirt Code of Conduct: 
>> > https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives: 
>> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3PAG5HMBHUOJYPAI5ES3JHG6HCC3S6N/
>>
>
> Lightbits Labs
> Lead the cloud-native data center transformation by delivering scalable and 
> efficient software defined storage that is easy to consume.
>
> This message is sent in confidence for the addressee only.  It may contain 
> legally privileged information. The contents are not to be disclosed to 
> anyone other tha

[ovirt-users] Re: [ANN] Schedule for oVirt 4.5.0

2022-02-23 Thread Sandro Bonazzola
Il giorno mer 23 feb 2022 alle ore 11:36 Gilboa Davara
 ha scritto:
>
> Hello,
>
> Gluster is still mentioned in the release page.
> Will it be supported as a storage backend in 4.5?


As RHGS is going end of life in 2024 it is being deprecated for RHV.
The upstream Gluster project has no plan for going end of life as far
as I know so there is no reason to remove the possibility of using
gluster as storage backend in oVirt.
There's no plan to completely remove support for Gluster as a storage backend.

>
>
> - Gilboa
>
>
> On Tue, Feb 22, 2022 at 4:57 PM Sandro Bonazzola  wrote:
>>
>> The oVirt development team leads are pleased to inform that the
>> schedule for oVirt 4.5.0 has been finalized.
>>
>> The key dates follows:
>>
>> * Feature Freeze - String Freeze - Alpha release: 2022-03-15
>> * Alpha release test day: 2022-03-17
>> * Code freeze - Beta release: 2022-03-29
>> * Beta release test day: 2022-03-31
>> * General Availability release: 2022-04-12
>>
>> A release management draft page has been created at:
>> https://www.ovirt.org/release/4.5.0/
>>
>> If you're willing to help testing the release during the test days
>> please join the oVirt development mailing list at
>> https://lists.ovirt.org/archives/list/de...@ovirt.org/ and report your
>> feedback there.
>> Instructions for installing oVirt 4.5.0 Alpha and oVirt 4.5.0 Beta for
>> testing will be added to the release page
>> https://www.ovirt.org/release/4.5.0/ when the corresponding version
>> will be released.
>>
>> Professional Services, Integrators and Backup vendors: please plan a
>> test session against your additional services, integrated solutions,
>> downstream rebuilds, backup solution accordingly.
>> If you're not listed here:
>> https://ovirt.org/community/user-stories/users-and-providers.html
>> consider adding your company there.
>>
>> If you're willing to help updating the localization for oVirt 4.5.0
>> please follow https://ovirt.org/develop/localization.html
>>
>> If you're willing to help promoting the oVirt 4.5.0 release you can
>> submit your banner proposals for the oVirt home page and for the
>> social media advertising at https://github.com/oVirt/ovirt-site/issues
>> As an alternative please consider submitting a case study as in
>> https://ovirt.org/community/user-stories/user-stories.html
>>
>> Feature owners: please start planning a presentation of your feature
>> for oVirt Youtube channel: https://www.youtube.com/c/ovirtproject
>>
>> Do you want to contribute to getting ready for this release?
>> Read more about oVirt community at https://ovirt.org/community/ and
>> join the oVirt developers https://ovirt.org/develop/
>>
>> Thanks,
>> --
>>
>> Sandro Bonazzola
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>> Red Hat EMEA
>> sbona...@redhat.com
>> Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7646LEQIHL76HIJTAZWCXWAHT3M6V47C/



-- 

Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q5MLRGDCO6LYLKIVN7JBEAKC4757WWBM/


[ovirt-users] Re: [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-02-23 Thread Muli Ben-Yehuda
Certainly, thanks for your help!
I put cinderlib and engine.log here:
http://www.mulix.org/misc/ovirt-logs-20220223123641.tar.gz
If you grep for 'mulivm1' you will see for example:

2022-02-22 04:31:04,473-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default
task-10) [36d8a122] Command 'HotPlugDiskVDSCommand(HostName = client1,
HotPlugDiskVDSParameters:{hostId='fc5c2860-36b1-4213-843f-10ca7b35556c',
vmId='e13f73a0-8e20-4ec3-837f-aeacc082c7aa',
diskId='d1e1286b-38cc-4d56-9d4e-f331ffbe830f', addressMap='[bus=0,
controller=0, unit=2, type=drive, target=0]'})' execution failed:
VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error =
Failed to bind /dev/mapper/ on to /var/run/libvirt/qemu/21-mulivm1.mapper.:
Not a directory, code = 45

Please let me know what other information will be useful and I will provide.

Cheers,
Muli

On Wed, Feb 23, 2022 at 11:14 AM Benny Zlotnik  wrote:

> Hi,
>
> We haven't tested this, and we do not have any code to handle nvme/tcp
> drivers, only iscsi and rbd. Given the path seen in the logs
> '/dev/mapper', it looks like it might require code changes to support
> this.
> Can you share cinderlib[1] and engine logs to see what is returned by
> the driver? I may be able to estimate what would be required (it's
> possible that it would be enough to just change the handling of the
> path in the engine)
>
> [1] /var/log/ovirt-engine/cinderlib/cinderlib//log
>
> On Wed, Feb 23, 2022 at 10:54 AM  wrote:
> >
> > Hi everyone,
> >
> > We are trying to set up ovirt (4.3.10 at the moment, customer
> preference) to use Lightbits (https://www.lightbitslabs.com) storage via
> our openstack cinder driver with cinderlib. The cinderlib and cinder driver
> bits are working fine but when ovirt tries to attach the device to a VM we
> get the following error:
> >
> > libvirt:  error : cannot create file '/var/run/libvirt/qemu/
> 18-mulivm1.dev/mapper/': Is a directory
> >
> > We get the same error regardless of whether I try to run the VM or try
> to attach the device while it is running. The error appears to come from
> vdsm which passes /dev/mapper as the preferred device?
> >
> > 2022-02-22 09:50:11,848-0500 INFO  (vm/3ae7dcf4) [vdsm.api] FINISH
> appropriateDevice return={'path': '/dev/mapper/', 'truesize':
> '53687091200', 'apparentsize': '53687091200'} from=internal,
> task_id=77f40c4e-733d-4d82-b418-aaeb6b912d39 (api:54)
> > 2022-02-22 09:50:11,849-0500 INFO  (vm/3ae7dcf4) [vds] prepared volume
> path: /dev/mapper/ (clientIF:510)
> >
> > Suggestions for how to debug this further? Is this a known issue? Did
> anyone get nvme/tcp storage working with ovirt and/or vdsm?
> >
> > Thanks,
> > Muli
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3PAG5HMBHUOJYPAI5ES3JHG6HCC3S6N/
>
>

-- 


Lightbits Labs
Lead the cloud-native data center transformation by delivering scalable and
efficient software defined storage that is easy to consume.

This message is sent in confidence for the addressee only. It may contain
legally privileged information. The contents are not to be disclosed to
anyone other than the addressee. Unauthorized recipients are requested to
preserve this confidentiality, advise the sender immediately of any error in
transmission and delete the email from their systems.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FKGTRKMIR2WKCXWOBLKKCVRMGHJTYWKD/


[ovirt-users] Re: hosted engine deployment (v4.4.10) - TASK Check engine VM health - fatal FAILED

2022-02-23 Thread Gilboa Davara
On Mon, Feb 21, 2022 at 12:07 PM Strahil Nikolov 
wrote:

> You can blacklist packages in dnf with specific version, and thus you
> don't need to blacklist from repo.
>
> Best Regards,
> Strahil Nikolov
>
>
Hello,

Understood.
Per your qemu 6.2 question, how can I test it? Is it packaged in some
testing repo?

- Gilboa


> On Mon, Feb 21, 2022 at 10:33, Gilboa Davara
>  wrote:
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SJZNIGOZXWC44RMUGO73BO5BIWFGELHT/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZF3X276Y2WS34RVF7DZ3FLH5UCYUBDZN/


[ovirt-users] Re: [ANN] Schedule for oVirt 4.5.0

2022-02-23 Thread Gilboa Davara
Hello,

Gluster is still mentioned in the release page.
Will it be supported as a storage backend in 4.5?

- Gilboa


On Tue, Feb 22, 2022 at 4:57 PM Sandro Bonazzola 
wrote:

> The oVirt development team leads are pleased to inform that the
> schedule for oVirt 4.5.0 has been finalized.
>
> The key dates follows:
>
> * Feature Freeze - String Freeze - Alpha release: 2022-03-15
> * Alpha release test day: 2022-03-17
> * Code freeze - Beta release: 2022-03-29
> * Beta release test day: 2022-03-31
> * General Availability release: 2022-04-12
>
> A release management draft page has been created at:
> https://www.ovirt.org/release/4.5.0/
>
> If you're willing to help testing the release during the test days
> please join the oVirt development mailing list at
> https://lists.ovirt.org/archives/list/de...@ovirt.org/ and report your
> feedback there.
> Instructions for installing oVirt 4.5.0 Alpha and oVirt 4.5.0 Beta for
> testing will be added to the release page
> https://www.ovirt.org/release/4.5.0/ when the corresponding version
> will be released.
>
> Professional Services, Integrators and Backup vendors: please plan a
> test session against your additional services, integrated solutions,
> downstream rebuilds, backup solution accordingly.
> If you're not listed here:
> https://ovirt.org/community/user-stories/users-and-providers.html
> consider adding your company there.
>
> If you're willing to help updating the localization for oVirt 4.5.0
> please follow https://ovirt.org/develop/localization.html
>
> If you're willing to help promoting the oVirt 4.5.0 release you can
> submit your banner proposals for the oVirt home page and for the
> social media advertising at https://github.com/oVirt/ovirt-site/issues
> As an alternative please consider submitting a case study as in
> https://ovirt.org/community/user-stories/user-stories.html
>
> Feature owners: please start planning a presentation of your feature
> for oVirt Youtube channel: https://www.youtube.com/c/ovirtproject
>
> Do you want to contribute to getting ready for this release?
> Read more about oVirt community at https://ovirt.org/community/ and
> join the oVirt developers https://ovirt.org/develop/
>
> Thanks,
> --
>
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com
> Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7646LEQIHL76HIJTAZWCXWAHT3M6V47C/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XOUR7ME5EGYPJKL6YK3QEBX2AOLGCREP/


[ovirt-users] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-02-23 Thread Benny Zlotnik
Hi,

We haven't tested this, and we do not have any code to handle nvme/tcp
drivers, only iscsi and rbd. Given the path seen in the logs
'/dev/mapper', it looks like it might require code changes to support
this.
Can you share cinderlib[1] and engine logs to see what is returned by
the driver? I may be able to estimate what would be required (it's
possible that it would be enough to just change the handling of the
path in the engine)

[1] /var/log/ovirt-engine/cinderlib/cinderlib//log
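
A minimal sketch of bundling the two requested logs for sharing, assuming the
default engine log location (/var/log/ovirt-engine/engine.log) alongside the
cinderlib directory referenced in [1]:

tar czf ovirt-cinderlib-logs.tar.gz \
    /var/log/ovirt-engine/engine.log \
    /var/log/ovirt-engine/cinderlib/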

On Wed, Feb 23, 2022 at 10:54 AM  wrote:
>
> Hi everyone,
>
> We are trying to set up ovirt (4.3.10 at the moment, customer preference) to 
> use Lightbits (https://www.lightbitslabs.com) storage via our openstack 
> cinder driver with cinderlib. The cinderlib and cinder driver bits are 
> working fine but when ovirt tries to attach the device to a VM we get the 
> following error:
>
> libvirt:  error : cannot create file 
> '/var/run/libvirt/qemu/18-mulivm1.dev/mapper/': Is a directory
>
> We get the same error regardless of whether I try to run the VM or try to 
> attach the device while it is running. The error appears to come from vdsm 
> which passes /dev/mapper as the preferred device?
>
> 2022-02-22 09:50:11,848-0500 INFO  (vm/3ae7dcf4) [vdsm.api] FINISH 
> appropriateDevice return={'path': '/dev/mapper/', 'truesize': '53687091200', 
> 'apparentsize': '53687091200'} from=internal, 
> task_id=77f40c4e-733d-4d82-b418-aaeb6b912d39 (api:54)
> 2022-02-22 09:50:11,849-0500 INFO  (vm/3ae7dcf4) [vds] prepared volume path: 
> /dev/mapper/ (clientIF:510)
>
> Suggestions for how to debug this further? Is this a known issue? Did anyone 
> get nvme/tcp storage working with ovirt and/or vdsm?
>
> Thanks,
> Muli
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3PAG5HMBHUOJYPAI5ES3JHG6HCC3S6N/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DVVE74VJYR3IIE53AOPG7XDLZJDUEMD6/


[ovirt-users] Re: how to convert centos 8 to centos 8 Stream

2022-02-23 Thread Sketch

On Wed, 23 Feb 2022, Adam Xu wrote:


How can we convert centos 8 to centos 8 stream? Thanks.


dnf install centos-release-stream
dnf swap centos-{linux,stream}-repos
dnf distro-sync

Note that the last command is effectively a yum update that syncs your 
packages with all of the installed repos, so make sure you install the 
latest ovirt-release44 package with the working mirror URLs before you run 
it, or you might end up with some (or all) oVirt packages removed.
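
A hedged sketch of the full sequence with that ovirt-release step included; the
release RPM URL is the one oVirt 4.4 normally documents, but verify it before
running this in your environment:

dnf install -y centos-release-stream
dnf swap -y centos-{linux,stream}-repos
# Re-install the oVirt release package first so its repo definitions are
# present before the sync (URL assumed to be the standard 4.4 release RPM).
dnf install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
dnf distro-sync -y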

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JOHWT2LYMOEQW7IKUSIB7K6OS4T26V2P/


[ovirt-users] help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-02-23 Thread muli
Hi everyone,

We are trying to set up ovirt (4.3.10 at the moment, customer preference) to 
use Lightbits (https://www.lightbitslabs.com) storage via our openstack cinder 
driver with cinderlib. The cinderlib and cinder driver bits are working fine 
but when ovirt tries to attach the device to a VM we get the following error:

libvirt:  error : cannot create file 
'/var/run/libvirt/qemu/18-mulivm1.dev/mapper/': Is a directory

We get the same error regardless of whether I try to run the VM or try to 
attach the device while it is running. The error appears to come from vdsm 
which passes /dev/mapper as the preferred device?

2022-02-22 09:50:11,848-0500 INFO  (vm/3ae7dcf4) [vdsm.api] FINISH 
appropriateDevice return={'path': '/dev/mapper/', 'truesize': '53687091200', 
'apparentsize': '53687091200'} from=internal, 
task_id=77f40c4e-733d-4d82-b418-aaeb6b912d39 (api:54)
2022-02-22 09:50:11,849-0500 INFO  (vm/3ae7dcf4) [vds] prepared volume path: 
/dev/mapper/ (clientIF:510)

Suggestions for how to debug this further? Is this a known issue? Did anyone 
get nvme/tcp storage working with ovirt and/or vdsm?

Thanks,
Muli

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3PAG5HMBHUOJYPAI5ES3JHG6HCC3S6N/


[ovirt-users] Gluster Performance issues

2022-02-23 Thread Alex Morrison
Hello All,

We have 3 servers with a RAID 50 array each, and we are having extreme
performance issues with our Gluster storage: writes on Gluster seem to take
at least 3 times longer than on the RAID directly. Can this be improved?
I've read through several other performance-issue threads but have been
unable to make any improvements.

"gluster volume info" and "gluster volume profile vmstore info" is below

=

-Inside Gluster - test took 35+ hours:
[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s 600G
-n 0 -m TEST -f -b -u root
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
TEST           600G           35.7m  17 5824k   7           112m   13 182.7   6
Latency                       5466ms    12754ms            3499ms     1589ms

1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,

=

-Outside Gluster - test took 18 minutes:
[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s 600G
-n 0 -m TEST -f -b -u root
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
TEST           600G            567m  78  149m  30           307m   37  83.0  57
Latency                        205ms     4630ms            1450ms      679ms

1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,

=

[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume info
Volume Name: engine
Type: Replicate
Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on

Volume Name: vmstore
Type: Replicate
Volume ID: 2670ff29-8d43-4610-a437-c6ec2c235753
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-storage.dgi:/gluster_bricks/vmstore/vmstore
Brick2: ovirt2-storage.dgi:/gluster_bricks/vmstore/vmstore
Brick3: ovirt3-storage.dgi:/gluster_bricks/vmstore/vmstore
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 20
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
server.tcp-user-timeout: 20
server.keepalive-time: 10
server.keepalive-interval: 2
server.keepalive-count: 5
cluster.lookup-optimize: off

==
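
The "gluster volume profile vmstore info" output mentioned at the top of this
message does not appear in the archived copy; for reference, a minimal sketch
of how such a profile is typically captured with the standard gluster CLI,
using the volume name from the post:

gluster volume profile vmstore start
# run the bonnie++ test (or normal VM I/O) while profiling is enabled, then:
gluster volume profile vmstore info
gluster volume profile vmstore stop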

[ovirt-users] Re: Install hosted engine using fcoe

2022-02-23 Thread agarioqv
Hello. To install the self-hosted engine using hosted-engine --deploy: # yum 
install ovirt-hosted-engine-setup.
To install the self-hosted engine using the Cockpit user interface: # yum 
install cockpit-ovirt-dashboard.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O737LDUTDFXYF5WLHWRQNUREWVTIR2NJ/