Hello all,

I am using oVirt 4.3.10 on CentOS 7.8 with GlusterFS 6.9.
My Gluster setup has 3 hosts in replica 3 (2 data hosts + 1 arbiter).
All 3 hosts are Dell R720s with a PERC H710 Mini RAID controller (maximum
throughput 6 Gb/s) and 2 x 1TB Samsung SSDs in RAID 0. The volume is
partitioned using LVM thin provisioning and formatted as XFS.
The hosts have separate 10GbE network cards for storage traffic.
The Gluster network runs over these 10GbE cards and the volume is mounted
via GlusterFS FUSE (NFS is disabled). The migration network is also active
on the same storage network.


The problem is that Gluster does not use the 10GbE network to its full
potential.
During live migration of VMs I see speeds of 7-9 Gb/s.
iperf3 tests on the same network report 9.9 Gb/s, which rules out the
network setup as the bottleneck (I will not paste all the iperf3 tests
here for now).
I did not enable all the volume options from "Optimize for Virt Store",
because of the bug where cluster.granular-entry-heal cannot be set to
enable (this was fixed in vdsm 4.40, but that only works on CentOS 8 with
oVirt 4.4).
I would be happy to know what all these "Optimize for Virt Store" options
are, so I can set them manually.
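For reference, this is the "virt" option group file shipped with the
glusterfs packages on my hosts (/var/lib/glusterd/groups/virt, glusterfs
6.9). I assume "Optimize for Virt Store" applies roughly this same set via
"gluster volume set data group virt", but I would appreciate confirmation:

```
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
cluster.choose-local=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on
```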


The write speed on the local disk inside the host using dd is between 700 MB/s and 1 GB/s.


[root@host1 ~]# dd if=/dev/zero of=test bs=100M count=80 status=progress
8074035200 bytes (8.1 GB) copied, 11.059372 s, 730 MB/s
80+0 records in
80+0 records out
8388608000 bytes (8.4 GB) copied, 11.9928 s, 699 MB/s
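(A side note on my methodology: I realize dd from /dev/zero through the
page cache can overstate the local disk speed, so I can redo the tests
with direct I/O if that helps. Something along these lines, with the
output path and sizes just illustrative:)

```shell
# Write test that bypasses the page cache (oflag=direct) and flushes
# at the end (conv=fsync); iflag=fullblock ensures full 100M reads
# from the source. Path and sizes are illustrative.
dd if=/dev/zero of=/root/ddtest bs=100M count=40 \
   iflag=fullblock oflag=direct conv=fsync status=progress
rm -f /root/ddtest
```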


The dd write test on the Gluster volume inside the host is poor, only
~120 MB/s.
During the dd test, looking at Networks -> Gluster network -> Hosts, the
Tx and Rx network speed barely reaches over 1 Gb/s (~1073 Mb/s) out of a
maximum of 10000 Mb/s.


[root@host1 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/gluster1.domain.local\:_data/test bs=100M count=80 status=progress
8283750400 bytes (8.3 GB) copied, 71.297942 s, 116 MB/s
80+0 records in
80+0 records out
8388608000 bytes (8.4 GB) copied, 71.9545 s, 117 MB/s


I have attached my Gluster volume settings and mount options below.
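If it helps with debugging, I can also capture Gluster's own per-brick
latency statistics while the dd test runs; I would collect them roughly
like this (volume name as above):

```
gluster volume profile data start
# ... run the dd write test against the mounted volume ...
gluster volume profile data info
gluster volume profile data stop
```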

Thanks,
Emy
DD write tests




[root@host1 ~]# dd if=/dev/zero of=test bs=100M count=40 status=progress
3460300800 bytes (3.5 GB) copied, 3.159304 s, 1.1 GB/s
40+0 records in
40+0 records out
4194304000 bytes (4.2 GB) copied, 3.65431 s, 1.1 GB/s
[root@host1 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/gluster1.domain.local\:_data/test bs=100M count=40 status=progress
4194304000 bytes (4.2 GB) copied, 24.316613 s, 172 MB/s
40+0 records in
40+0 records out
4194304000 bytes (4.2 GB) copied, 24.3866 s, 172 MB/s










Volume Name: data
Type: Replicate
Volume ID: 03143dca-b709-4065-b396-bea1923f4821
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1.domain.local:/gluster_bricks/data/data
Brick2: gluster2.domain.local:/gluster_bricks/data/data
Brick3: gluster3.domain.local:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
performance.client-io-threads: on
cluster.shd-max-threads: 8
performance.low-prio-threads: 32
performance.flush-behind: on
performance.write-behind: on
cluster.read-hash-mode: 3
features.shard-block-size: 64MB
server.allow-insecure: on
storage.fips-mode-rchecksum: on
server.event-threads: 4
client.event-threads: 4
performance.write-behind-window-size: 16MB
performance.io-thread-count: 32
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
network.ping-timeout: 30
storage.owner-uid: 36
storage.owner-gid: 36
cluster.granular-entry-heal: enable
network.inode-lru-limit: 16384
config.transport: tcp
client.bind-insecure: on
cluster.enable-shared-storage: disable






Mount options for the data volume:


/dev/mapper/gluster_vg_sda3-gluster_lv_data on /gluster_bricks/data type xfs (rw,noatime,nodiratime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
gluster1.domain.local:/data on /rhev/data-center/mnt/glusterSD/gluster1.domain.local:_data type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)






Output of "gluster volume get data all":




[root@host1 ~]# gluster volume get data all
Option                                  Value


------                                  -----


cluster.lookup-unhashed                 on


cluster.lookup-optimize                 on


cluster.min-free-disk                   10%


cluster.min-free-inodes                 5%
                    
cluster.rebalance-stats                 off
                    
cluster.subvols-per-directory           (null)


cluster.readdir-optimize                off


cluster.rsync-hash-regex                (null)


cluster.extra-hash-regex                (null)


cluster.dht-xattr-name                  trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid    off


cluster.rebal-throttle                  normal


cluster.lock-migration                  off


cluster.force-migration                 off


cluster.local-volume-name               (null)


cluster.weighted-rebalance              on


cluster.switch-pattern                  (null)


cluster.entry-change-log                on


cluster.read-subvolume                  (null)


cluster.read-subvolume-index            -1


cluster.read-hash-mode                  3


cluster.background-self-heal-count      8


cluster.metadata-self-heal              off


cluster.data-self-heal                  off


cluster.entry-self-heal                 off


cluster.self-heal-daemon                on


cluster.heal-timeout                    600


cluster.self-heal-window-size           1


cluster.data-change-log                 on


cluster.metadata-change-log             on


cluster.data-self-heal-algorithm        full


cluster.eager-lock                      enable


disperse.eager-lock                     on


disperse.other-eager-lock               on


disperse.eager-lock-timeout             1


disperse.other-eager-lock-timeout       1


cluster.quorum-type                     auto


cluster.quorum-count                    (null)


cluster.choose-local                    off


cluster.self-heal-readdir-size          1KB


cluster.post-op-delay-secs              1


cluster.ensure-durability               on


cluster.consistent-metadata             no


cluster.heal-wait-queue-length          128


cluster.favorite-child-policy           none


cluster.full-lock                       yes


diagnostics.latency-measurement         off


diagnostics.dump-fd-stats               off


diagnostics.count-fop-hits              off


diagnostics.brick-log-level             INFO


diagnostics.client-log-level            INFO


diagnostics.brick-sys-log-level         CRITICAL


diagnostics.client-sys-log-level        CRITICAL


diagnostics.brick-logger                (null)


diagnostics.client-logger               (null)


diagnostics.brick-log-format            (null)


diagnostics.client-log-format           (null)


diagnostics.brick-log-buf-size          5


diagnostics.client-log-buf-size         5


diagnostics.brick-log-flush-timeout     120


diagnostics.client-log-flush-timeout    120


diagnostics.stats-dump-interval         0


diagnostics.fop-sample-interval         0


diagnostics.stats-dump-format           json


diagnostics.fop-sample-buf-size         65535


diagnostics.stats-dnscache-ttl-sec      86400


performance.cache-max-file-size         0


performance.cache-min-file-size         0


performance.cache-refresh-timeout       1


performance.cache-priority


performance.cache-size                  32MB


performance.io-thread-count             32


performance.high-prio-threads           16


performance.normal-prio-threads         16


performance.low-prio-threads            32


performance.least-prio-threads          1


performance.enable-least-priority       on


performance.iot-watchdog-secs           (null)


performance.iot-cleanup-disconnected-reqs off


performance.iot-pass-through            false


performance.io-cache-pass-through       false


performance.cache-size                  128MB


performance.qr-cache-timeout            1


performance.cache-invalidation          false


performance.ctime-invalidation          false


performance.flush-behind                on


performance.nfs.flush-behind            on


performance.write-behind-window-size    16MB


performance.resync-failed-syncs-after-fsync off


performance.nfs.write-behind-window-size 1MB


performance.strict-o-direct             on


performance.nfs.strict-o-direct         off


performance.strict-write-ordering       off


performance.nfs.strict-write-ordering   off


performance.write-behind-trickling-writes on


performance.aggregate-size              128KB


performance.nfs.write-behind-trickling-writes on


performance.lazy-open                   yes


performance.read-after-open             yes


performance.open-behind-pass-through    false


performance.read-ahead-page-count       4


performance.read-ahead-pass-through     false


performance.readdir-ahead-pass-through  false


performance.md-cache-pass-through       false


performance.md-cache-timeout            1


performance.cache-swift-metadata        true


performance.cache-samba-metadata        false


performance.cache-capability-xattrs     true


performance.cache-ima-xattrs            true


performance.md-cache-statfs             off


performance.xattr-cache-list


performance.nl-cache-pass-through       false


features.encryption                     off


network.frame-timeout                   1800


network.ping-timeout                    30


network.tcp-window-size                 (null)


client.ssl                              off


network.remote-dio                      off


client.event-threads                    4


client.tcp-user-timeout                 0


client.keepalive-time                   20


client.keepalive-interval               2


client.keepalive-count                  9


network.tcp-window-size                 (null)


network.inode-lru-limit                 16384


auth.allow                              *


auth.reject                             (null)


transport.keepalive                     1


server.allow-insecure                   on


server.root-squash                      off


server.all-squash                       off


server.anonuid                          65534


server.anongid                          65534


server.statedump-path                   /var/run/gluster


server.outstanding-rpc-limit            64


server.ssl                              off


auth.ssl-allow                          *


server.manage-gids                      off


server.dynamic-auth                     on


client.send-gids                        on


server.gid-timeout                      300


server.own-thread                       (null)


server.event-threads                    4


server.tcp-user-timeout                 42


server.keepalive-time                   20


server.keepalive-interval               2


server.keepalive-count                  9


transport.listen-backlog                1024


transport.address-family                inet


performance.write-behind                on


performance.read-ahead                  off


performance.readdir-ahead               on


performance.io-cache                    off


performance.open-behind                 on


performance.quick-read                  off


performance.nl-cache                    off


performance.stat-prefetch               on


performance.client-io-threads           on


performance.nfs.write-behind            on


performance.nfs.read-ahead              off


performance.nfs.io-cache                off


performance.nfs.quick-read              off


performance.nfs.stat-prefetch           off


performance.nfs.io-threads              off


performance.force-readdirp              true


performance.cache-invalidation          false


performance.global-cache-invalidation   true


features.uss                            off


features.snapshot-directory             .snaps


features.show-snapshot-directory        off


features.tag-namespaces                 off


network.compression                     off


network.compression.window-size         -15


network.compression.mem-level           8


network.compression.min-size            0


network.compression.compression-level   -1


network.compression.debug               false


features.default-soft-limit             80%


features.soft-timeout                   60


features.hard-timeout                   5


features.alert-time                     86400


features.quota-deem-statfs              off


geo-replication.indexing                off


geo-replication.indexing                off


geo-replication.ignore-pid-check        off


geo-replication.ignore-pid-check        off


features.quota                          off


features.inode-quota                    off


features.bitrot                         disable


debug.trace                             off


debug.log-history                       no


debug.log-file                          no


debug.exclude-ops                       (null)


debug.include-ops                       (null)


debug.error-gen                         off


debug.error-failure                     (null)


debug.error-number                      (null)


debug.random-failure                    off


debug.error-fops                        (null)


nfs.disable                             on


features.read-only                      off


features.worm                           off


features.worm-file-level                off


features.worm-files-deletable           on


features.default-retention-period       120


features.retention-mode                 relax


features.auto-commit-period             180


storage.linux-aio                       off


storage.batch-fsync-mode                reverse-fsync


storage.batch-fsync-delay-usec          0


storage.owner-uid                       36


storage.owner-gid                       36


storage.node-uuid-pathinfo              off


storage.health-check-interval           30


storage.build-pgfid                     off


storage.gfid2path                       on


storage.gfid2path-separator             :


storage.reserve                         1


storage.health-check-timeout            10


storage.fips-mode-rchecksum             on


storage.force-create-mode               0000


storage.force-directory-mode            0000


storage.create-mask                     0777


storage.create-directory-mask           0777


storage.max-hardlinks                   100


features.ctime                          on


config.transport                        tcp


config.gfproxyd                         off


cluster.server-quorum-type              server


cluster.server-quorum-ratio             0


changelog.changelog                     off


changelog.changelog-dir                 {{ brick.path }}/.glusterfs/changelogs
changelog.encoding                      ascii


changelog.rollover-time                 15


changelog.fsync-interval                5


changelog.changelog-barrier-timeout     120


changelog.capture-del-path              off


features.barrier                        disable


features.barrier-timeout                120


features.trash                          off


features.trash-dir                      .trashcan


features.trash-eliminate-path           (null)


features.trash-max-filesize             5MB


features.trash-internal-op              off


cluster.enable-shared-storage           disable


locks.trace                             off


locks.mandatory-locking                 off


cluster.disperse-self-heal-daemon       enable


cluster.quorum-reads                    no


client.bind-insecure                    on


features.shard                          on


features.shard-block-size               64MB


features.shard-lru-limit                16384


features.shard-deletion-rate            100


features.scrub-throttle                 lazy


features.scrub-freq                     biweekly


features.scrub                          false


features.expiry-time                    120


features.cache-invalidation             off


features.cache-invalidation-timeout     60


features.leases                         off


features.lease-lock-recall-timeout      60


disperse.background-heals               8


disperse.heal-wait-qlength              128


cluster.heal-timeout                    600


dht.force-readdirp                      on


disperse.read-policy                    gfid-hash


cluster.shd-max-threads                 8


cluster.shd-wait-qlength                10000


cluster.locking-scheme                  granular


cluster.granular-entry-heal             enable


features.locks-revocation-secs          0


features.locks-revocation-clear-all     false


features.locks-revocation-max-blocked   0


features.locks-monkey-unlocking         false


features.locks-notify-contention        no


features.locks-notify-contention-delay  5


disperse.shd-max-threads                1


disperse.shd-wait-qlength               1024


disperse.cpu-extensions                 auto


disperse.self-heal-window-size          1


cluster.use-compound-fops               off


performance.parallel-readdir            off


performance.rda-request-size            131072


performance.rda-low-wmark               4096


performance.rda-high-wmark              128KB


performance.rda-cache-limit             10MB


performance.nl-cache-positive-entry     false


performance.nl-cache-limit              10MB


performance.nl-cache-timeout            60


cluster.brick-multiplex                 off


cluster.max-bricks-per-process          250


disperse.optimistic-change-log          on


disperse.stripe-cache                   4


cluster.halo-enabled                    False


cluster.halo-shd-max-latency            99999


cluster.halo-nfsd-max-latency           5


cluster.halo-max-latency                5


cluster.halo-max-replicas               99999


cluster.halo-min-replicas               2


features.selinux                        on


cluster.daemon-log-level                INFO


debug.delay-gen                         off


delay-gen.delay-percentage              10%


delay-gen.delay-duration                100000


delay-gen.enable


disperse.parallel-writes                on


features.sdfs                           off


features.cloudsync                      off


features.ctime                          on


ctime.noatime                           on


feature.cloudsync-storetype             (null)


features.enforce-mandatory-lock         off


[root@host1 ~]#
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7BR6TZQ4EXS4SIEHTZN2WJUMBYZHP5GJ/
