Hey David,
Did you get any reply on this thread? Did you manage to identify the problem?
Best Regards,
Strahil Nikolov

On Wednesday, April 19, 2023, 4:35 AM, David Cunningham 
<dcunning...@voisonics.com> wrote:

Hello,
I tried reporting this in issue #4097 but got no response. Following on from 
issues #1741 and #3498, we were experiencing very slow response times accessing 
files on a GlusterFS 9.6 system on an Ubuntu 18.04 server. The server in 
question is both a GlusterFS node and a client.

Listing directory contents via the FUSE mount typically took 2-10 seconds, 
whereas a different client was fast. In mnt-glusterfs.log we saw lots of 
warnings like this:

[2023-04-03 20:16:14.789588 +0000] W [fuse-bridge.c:310:check_and_dump_fuse_W] 
0-glusterfs-fuse: writing to fuse device yielded ENOENT 256 times
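One way to get a feel for how heavy these warnings are over time is to sum the reported ENOENT counts per hour. The sketch below is not from the original thread: it generates a sample log inline so it runs standalone; in practice you would point LOG at the real mount log (e.g. /var/log/glusterfs/mnt-glusterfs.log):

```shell
# Hedged sketch: tally the ENOENT write-failure counts per hour from the
# FUSE mount log. A sample log is generated here so the snippet is
# self-contained; point LOG at your real log in practice.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[2023-04-03 20:16:14.789588 +0000] W [fuse-bridge.c:310:check_and_dump_fuse_W] 0-glusterfs-fuse: writing to fuse device yielded ENOENT 256 times
[2023-04-03 20:45:02.123456 +0000] W [fuse-bridge.c:310:check_and_dump_fuse_W] 0-glusterfs-fuse: writing to fuse device yielded ENOENT 31 times
EOF
result=$(grep 'writing to fuse device yielded ENOENT' "$LOG" \
  | awk '{ hour = substr($1, 2) " " substr($2, 1, 2)   # "[2023-04-03" + "20" -> "2023-04-03 20"
           total[hour] += $(NF - 1) }                  # the count precedes the trailing "times"
         END { for (h in total) print h, total[h] }' \
  | sort)
echo "$result"
rm -f "$LOG"
```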

After running "echo 3 > /proc/sys/vm/drop_caches" as suggested in issue #1741, 
the response time improved dramatically to around 0.009s, the same as the 
other client.
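For reference, the before/after measurement can be reproduced with something like the following (the mount point /mnt/glusterfs is an assumption based on the log file name; requires root for the drop_caches write):

```shell
# Hedged sketch: compare directory-listing latency before and after
# dropping the kernel caches. Mount point is a placeholder.
time ls /mnt/glusterfs > /dev/null

sync                                # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries, and inodes

time ls /mnt/glusterfs > /dev/null
```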

Can you please advise how we should tune GlusterFS to avoid this problem? I see 
mention of the --lru-limit and --invalidate-limit options in that issue, but to 
be honest I don't understand how to use the warning messages to decide on a 
suitable value for those options. Thanks in advance.
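In case it helps frame the question: as I understand the docs, both options can be passed as FUSE mount options, with lru-limit bounding the client's in-memory inode table and invalidate-limit capping queued invalidation requests. The values below are placeholders, not recommendations:

```shell
# Hedged example (values are placeholders, not tuning advice):
# remount the volume with explicit inode-table and invalidation limits.
umount /mnt/glusterfs
mount -t glusterfs -o lru-limit=65536,invalidate-limit=16 \
    br:/gvol0 /mnt/glusterfs
```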

Here are the GlusterFS details:

root@br:~# gluster volume info
 
Volume Name: gvol0
Type: Replicate
Volume ID: 2d2c1552-bc93-4c91-b8ca-73553f00fdcd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: br:/nodirectwritedata/gluster/gvol0
Brick2: sg:/nodirectwritedata/gluster/gvol0
Options Reconfigured:
cluster.min-free-disk: 20%
network.ping-timeout: 10
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
storage.health-check-interval: 0
cluster.server-quorum-ratio: 50
root@br:~# 
root@br:~# gluster volume status
Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick br:/nodirectwritedata/gluster/gvol0   49152     0          Y       4761 
Brick sg:/nodirectwritedata/gluster/gvol0   49152     0          Y       2329 
Self-heal Daemon on localhost               N/A       N/A        Y       5304 
Self-heal Daemon on sg                      N/A       N/A        Y       2629 
 
Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks
 
root@br:~# 
root@br:~# gluster volume heal gvol0 info summary
Brick br:/nodirectwritedata/gluster/gvol0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick sg:/nodirectwritedata/gluster/gvol0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Thank you,
--David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


