[Gluster-users] Free space reported not consistent with bricks

2020-01-26 Thread Stefan
Hi, why is it that the free space on the volume mount (via FUSE) is never equivalent to any of the bricks' free space in a replicated volume? For example here:

$ df -m on host-01:
> /dev/mapper/gluster-brick01 7630880 4890922 2722008 65%
> /data/glusterfs/backupv01/brick01
$ df -m on host-0
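A likely explanation, sketched under assumptions (the volume name backupv01 and mount point are taken from the brick path above, the client mount point /mnt/backupv01 is hypothetical): for a replicated volume the FUSE client reports the minimum free space across the replicas, and recent GlusterFS releases additionally hold back a reserve on each brick (the storage.reserve volume option, 1% by default), so the client-side df will sit a little below even the fullest brick:

```shell
# On a client: free space as the volume reports it
df -m /mnt/backupv01

# On each brick host: free space of the underlying filesystem
df -m /data/glusterfs/backupv01/brick01

# Inspect the per-brick reserve glusterd subtracts from what clients see
gluster volume get backupv01 storage.reserve
```

If the bricks are different sizes or unevenly full, the client figure tracks the most-constrained replica, not an average.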

[Gluster-users] Replicated volume load balancing

2020-01-26 Thread Stefan
Hi, using GlusterFS 6.7 with a 2+1 replicated volume, how are read requests load balanced, if at all? The reason I ask is that we see consistently higher usage/load on the first brick when compared to the second data brick. Are there any parameters to influence the read balancing? Thanks,
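One knob worth checking (a sketch, not a confirmed fix for this thread; the volume name backupv01 is an assumption): AFR serves all reads for a given file from a single replica, picked according to cluster.read-hash-mode. With the default GFID-based hashing, every client reads a given file from the same brick, which can skew load toward one data brick. Hashing on the client PID as well spreads reads across replicas:

```shell
# Show the current read-selection policy
gluster volume get backupv01 cluster.read-hash-mode

# 2 = hash on the file's GFID and the client PID, so different
# clients/processes read the same file from different replicas
gluster volume set backupv01 cluster.read-hash-mode 2
```

cluster.choose-local is the other option that commonly biases reads toward one brick, if clients are co-located with it.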

[Gluster-users] Invitation: Invitation: Invitation: GlusterFS community meeting @ Tue... @ Tue Jan 28, 2020 11:30am - 12:30pm (IST) (gluster-users@gluster.org)

2020-01-26 Thread ypadia
BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:REQUEST
BEGIN:VEVENT
DTSTART:20200128T06Z
DTEND:20200128T07Z
DTSTAMP:20200127T053000Z
ORGANIZER;CN=ypa...@redhat.com:mailto:ypa...@redhat.com
UID:03jdvcminjelci8v617aqjk...@google.com
ATT

Re: [Gluster-users] gluster NFS hang observed mounting or umounting at scale

2020-01-26 Thread Erik Jacobson
> One last reply to myself. One of the test cases my test scripts triggered turned out to be due to my NFS RW mount options.

OLD RW NFS mount options:
"rw,noatime,nocto,actimeo=3600,lookupcache=all,nolock,tcp,vers=3"

NEW options that work better:
"rw,noatime,nolock,tcp,vers=3"

I had copi
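The difference between the two option sets above is that nocto, actimeo=3600, and lookupcache=all all relax NFS cache-coherency checks (skipping close-to-open consistency, pinning attribute caching to an hour, and caching negative lookups), which is risky when many clients mount and unmount the same export concurrently. A minimal sketch of the working mount, assuming a hypothetical server name and export path:

```shell
# nfsserver:/gv0 and /mnt/gv0 are placeholders for the real export/mountpoint
mount -t nfs -o rw,noatime,nolock,tcp,vers=3 nfsserver:/gv0 /mnt/gv0
```

Dropping the aggressive caching options trades some metadata performance for correct revalidation at scale.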