Hi,
why is the free space reported on the volume mount (via FUSE) never
equal to the free space of any of the bricks in a replicated volume?
For example here:
$ df -m on host-01:
> /dev/mapper/gluster-brick01 7630880 4890922 2722008 65% /data/glusterfs/backupv01/brick01
$ df -m on host-0
Hi,
using GlusterFS 6.7 with a 2+1 replicated volume (two data bricks plus
an arbiter), how are read requests load-balanced, if at all?
The reason I ask is that we see consistently higher usage/load on the first
brick when compared to the second data brick.
Are there any parameters to influence the read balancing?
Thanks,
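A minimal sketch of the AFR options usually pointed at for this, with
<VOLNAME> standing in for the actual volume name (worth confirming the
exact behaviour against the 6.7 docs); note that the arbiter brick holds
no file data, so only the two data bricks can serve reads:

# current policy for choosing which replica serves reads for a file
$ gluster volume get <VOLNAME> cluster.read-hash-mode
# when on, a client running on a brick node prefers its local brick for reads
$ gluster volume get <VOLNAME> cluster.choose-local
# example: hash on GFID plus client PID to spread reads across the data bricks
$ gluster volume set <VOLNAME> cluster.read-hash-mode 2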
> One last reply to myself.
One of the test cases my test scripts triggered turned out to be
caused by my NFS RW mount options.
OLD RW NFS mount options:
"rw,noatime,nocto,actimeo=3600,lookupcache=all,nolock,tcp,vers=3"
NEW options that work better:
"rw,noatime,nolock,tcp,vers=3"
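For context, a sketch of a mount line using the new option string;
the server, export and mount point below are placeholders:

$ mount -t nfs -o rw,noatime,nolock,tcp,vers=3 <server>:/<volume> /mnt/<volume>

The dropped options (nocto, actimeo=3600, lookupcache=all) relax
close-to-open consistency and cache file attributes (and negative
lookups) for up to an hour, so one client can keep seeing stale results
after another client changes a file, which is presumably what the test
case tripped over.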
I had copi