oVirt uses the default shard size of 64MB, and I don't think that counts as a
'small file' at all.
There are a lot of tunables for optimizing Gluster, and I admit it's not an
easy task.
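You can check whether sharding is on and what shard size a volume uses (a
quick sketch; 'data' stands in for your actual volume name):

# show the sharding state and the shard size (64MB is the default)
gluster volume get data features.shard
gluster volume get data features.shard-block-size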

Deadline is good for databases, but with SSDs you should try the performance
of multiqueue (blk-mq) with the 'none' scheduler. By default, EL7 does not use
multiqueue.
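A minimal sketch for enabling it on EL7 (assuming the SSD is /dev/sda; blk-mq
has to be turned on via the kernel command line before 'none' shows up):

# enable blk-mq for SCSI and device-mapper devices on all installed kernels
grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"
# after a reboot, pick the 'none' scheduler for the SSD
echo none > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler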

PV alignment is important, and you can implement it by destroying the brick
and recreating it. You need the stripe size and the full stripe width (stripe
size x number of data disks in the RAID) during the pvcreate. In your case you
could consider using the SSDs in JBOD mode, as in case of failure you will
recover only 1 disk. In RAID0 the amount of I/O towards the disks will be the
same, which can lead them to hit predictive failure at the same time.
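As an illustration only (the numbers are assumptions; substitute your
controller's real stripe size and data disk count): for a RAID0 of 2 data
disks with a 256K stripe size, the full stripe width is 256K x 2 = 512K:

# align the PV data area to the full stripe width (illustrative values)
pvcreate --dataalignment 512K /dev/sda3
# verify the resulting offset of the data area
pvs -o +pe_start /dev/sda3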

For more details, check the RHGS 3.5 Administration Guide.

The rhgs-random-io tuned profile is optimized for Gluster workloads and can
bring benefits. In my case, I use a mix of rhgs-random-io and virtual-host.
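Assuming the Red Hat Gluster Storage tuned profiles are installed on the node
(the package name may differ on CentOS), switching is a one-liner:

# list the available profiles, then activate the Gluster-oriented one
tuned-adm list
tuned-adm profile rhgs-random-io
tuned-adm active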


The XFS isize (512) looks OK.

You should apply the settings from the virt group if you want the optimal
settings.
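A sketch (again, 'data' stands in for your actual volume name):

# apply every option defined in /var/lib/glusterd/groups/virt in one shot
gluster volume set data group virt
# review what is now in effect
gluster volume info data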


WARNING: NEVER disable sharding once enabled -> it must stay enabled for the life of the volume!


Best Regards,
Strahil Nikolov




On 29 June 2020 at 1:33:20 GMT+03:00, Jayme <jay...@gmail.com> wrote:
>I’ve tried various methods to improve gluster performance on similar
>hardware and never had much luck. Small file workloads were particularly
>troublesome. I ended up switching high performance VMs to NFS storage and
>performance with NFS improved greatly in my use case.
>
>On Sun, Jun 28, 2020 at 6:42 PM shadow emy <shadow.e...@gmail.com>
>wrote:
>
>> > Hello ,
>>
>> Hello and thank you for the reply. Below are the answers to your
>> questions.
>> >
>> > Let me ask some questions:
>> > 1. What is the scheduler for your PV ?
>>
>>
>> On the RAID controller device where the SSD disks are in RAID 0 (device
>> sda) it is set to "deadline". But on the LVM logical volume dm-7, where
>> the logical block for the "data" volume lives, it is set to none (I think
>> this is OK).
>>
>>
>> [root@host1 ~]# ls -al /dev/mapper/gluster_vg_sda3-gluster_lv_data
>> lrwxrwxrwx. 1 root root 7 Jun 28 14:14 /dev/mapper/gluster_vg_sda3-gluster_lv_data -> ../dm-7
>> [root@host1 ~]# cat /sys/block/dm-7/queue/scheduler
>> none
>> [root@host1 ~]# cat /sys/block/sda/queue/scheduler
>> noop [deadline] cfq
>>
>>
>>
>> > 2. Have you aligned your PV during the setup ('pvcreate
>> > --dataalignment alignment_value device') ?
>>
>>
>> I did not do any alignment other than the default. Below are the
>> partitions on /dev/sda.
>> Can I enable partition alignment now, and if yes, how?
>>
>> sfdisk -d /dev/sda
>> # partition table of /dev/sda
>> unit: sectors
>>
>> /dev/sda1 : start=     2048, size=   487424, Id=83, bootable
>> /dev/sda2 : start=   489472, size= 95731712, Id=8e
>> /dev/sda3 : start= 96221184, size=3808675840, Id=83
>> /dev/sda4 : start=        0, size=        0, Id= 0
>>
>>
>>
>> > 3. What is your tuned profile ? Do you use rhgs-random-io from the
>> > ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/red...
>> > ?
>>
>> My tuned active profile is virtual-host
>>
>> Current active profile: virtual-host
>>
>> No, I don't use any of the rhgs-random-io profiles.
>>
>> > 4. What is the output of "xfs_info /path/to/your/gluster/brick" ?
>>
>> xfs_info /gluster_bricks/data
>> meta-data=/dev/mapper/gluster_vg_sda3-gluster_lv_data isize=512    agcount=32, agsize=6553600 blks
>>          =                       sectsz=512   attr=2, projid32bit=1
>>          =                       crc=1        finobt=0 spinodes=0
>> data     =                       bsize=4096   blocks=209715200, imaxpct=25
>>          =                       sunit=64     swidth=64 blks
>> naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
>> log      =internal               bsize=4096   blocks=102400, version=2
>>          =                       sectsz=512   sunit=64 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>
>> > 5. Are you using Jumbo Frames ? Does your infra support them ?
>> > Usually MTU of 9k is standard, but some switches and NICs support up to
>> > 16k.
>> >
>>
>> Unfortunately I cannot enable an MTU of 9000 (Jumbo Frames) on specific
>> ports of these Cisco SG350X switches. The switches don't support enabling
>> Jumbo Frames on a single port, only on all ports at once.
>> I have other devices connected to the remaining 48 ports of the switches
>> that run at 1Gb/s.
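>> If jumbo frames ever become possible end to end, a quick way to verify
>> that the path really carries them (a sketch; eth0 and the peer address
>> are placeholders) is a ping that is not allowed to fragment:
>>
>> # 9000-byte MTU minus 28 bytes of IP+ICMP headers = 8972-byte payload
>> ip link set dev eth0 mtu 9000
>> ping -M do -s 8972 -c 3 <peer_storage_ip>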
>>
>> > All the options for "optimize for virt...." are located
>> > at /var/lib/glusterd/groups/virt on each gluster node.
>>
>> I have already looked at that file previously, but not all the volume
>> settings that are set by "Optimize for Virt Store" are stored there.
>> For example, "Optimize for Virt Store" sets network.remote-dio to disable,
>> while in glusterd/groups/virt it is set to enable. And
>> cluster.granular-entry-heal: enable is not present there, but it is set by
>> "Optimize for Virt Store".
>>
>> >
>> > Best Regards,
>> > Strahil Nikolov
>> >
>> >
>> >
>> >
>> > On Sunday, 28 June 2020, 22:13:09 GMT+3, jury cat
>> > <shadow.emy1(a)gmail.com> wrote:
>> >
>> >
>> >
>> >
>> >
>> > Hello all,
>> >
>> > I am using oVirt 4.3.10 on CentOS 7.8 with GlusterFS 6.9.
>> > My Gluster setup is 3 hosts in replica 3 (2 data hosts + 1 arbiter).
>> > All 3 hosts are Dell R720 with a PERC H710 Mini RAID controller (maximum
>> > throughput 6Gb/s) and 2x1TB Samsung SSDs in RAID 0. The volume is
>> > partitioned using LVM thin provisioning and formatted XFS.
>> > The hosts have separate 10GbE network cards for storage traffic.
>> > The Gluster network is connected to these 10GbE network cards and is
>> > mounted using FUSE GlusterFS (NFS is disabled). The Migration network is
>> > also activated on the same storage network.
>> >
>> >
>> > The problem is that the 10GbE network is not used at full potential by
>> > Gluster.
>> > If I do live migration of VMs I can see speeds of 7Gb/s ~ 9Gb/s.
>> > The same network tests using iperf3 reported 9.9Gb/s, which excludes the
>> > network setup as a bottleneck (I will not paste all the iperf3 tests
>> > here for now).
>> > I did not enable all the volume options from "Optimize for Virt Store",
>> > because of the bug that cannot set the volume option
>> > cluster.granular-entry-heal to enable (this was fixed in vdsm-4.40, but
>> > that works only on CentOS 8 with oVirt 4.4).
>> > I would be happy to know what all these "Optimize for Virt Store"
>> > options are, so I can set them manually.
>> >
>> >
>> > The speed on the disk inside the host using dd is between 700MB/s and
>> > 1GB/s.
>> >
>> >
>> > [root@host1 ~]# dd if=/dev/zero of=test bs=100M count=80 status=progress
>> > 8074035200 bytes (8.1 GB) copied, 11.059372 s, 730 MB/s
>> > 80+0 records in
>> > 80+0 records out
>> > 8388608000 bytes (8.4 GB) copied, 11.9928 s, 699 MB/s
>> >
>> >
>> > The dd write test on the Gluster volume inside the host is poor, only
>> > ~120MB/s.
>> > During the dd test, if I look at Networks -> Gluster network -> Hosts at
>> > Tx and Rx, the network speed barely reaches over 1Gb/s (~1073 Mb/s) out
>> > of a maximum of 10000 Mb/s.
>> >
>> >
>> > dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/gluster1.domain.local:_data/test bs=100M count=80 status=progress
>> > 8283750400 bytes (8.3 GB) copied, 71.297942 s, 116 MB/s
>> > 80+0 records in
>> > 80+0 records out
>> > 8388608000 bytes (8.4 GB) copied, 71.9545 s, 117 MB/s
>> >
>> >
>> > I have attached my Gluster volume settings and mount options.
>> >
>> > Thanks,
>> > Emy
>> >
>> >
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5LN65DAK4AA2ECNXOYN4Q5HXJ4F5X3G/
