I tried to reproduce the problem; it works fine for me with the development 
master branch (upcoming Monit 5.36.0).

Is there only a direct mountpoint, or are there some overlay mounts as well? 
Is it mounted only once?
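
You can check this on your side, for example, with something like this 
(adjust the path to your mountpoint):

    findmnt /PEOPLE_SYSDATA
    grep PEOPLE_SYSDATA /proc/mounts

If the mountpoint shows up more than once, or sits under an overlay, that 
would be useful to know.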


I have tested both check variants (monitoring the same filesystem twice):

1. using mountpoint:

check filesystem glusterfs path /glusterfs

2. using device:

check filesystem glusterfs_via_device path "glusterfs1:/vol01"
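
The debug output below was collected by running monit in the foreground with 
verbose logging, e.g.:

    monit -vI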


debug mode output:

'glusterfs' succeeded getting filesystem statistics for '/glusterfs'
'glusterfs' filesystem flags has not changed [current flags 
'rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072']

'glusterfs_via_device' succeeded getting filesystem statistics for 
'glusterfs1:/vol01'
'glusterfs_via_device' filesystem flags has not changed [current flags 
'rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072']

monit status:

# monit status
Monit 5.36.0 uptime: 1m

...

Filesystem 'glusterfs'
  status                       OK
  monitoring status            Monitored
  monitoring mode              active
  on reboot                    start
  filesystem type              fuse.glusterfs
  filesystem flags             
rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
  permission                   755
  uid                          0
  gid                          0
  block size                   4 kB
  space total                  5.2 TB (of which 4.1% is reserved for root user)
  space free for non superuser 2.3 TB [43.8%]
  space free total             2.5 TB [47.9%]
  inodes total                 357040128
  inodes free                  351580086 [98.5%]
  data collected               Thu, 13 Nov 2025 11:42:44

Filesystem 'glusterfs_via_device'
  status                       OK
  monitoring status            Monitored
  monitoring mode              active
  on reboot                    start
  filesystem type              fuse.glusterfs
  filesystem flags             
rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
  permission                   755
  uid                          0
  gid                          0
  block size                   4 kB
  space total                  5.2 TB (of which 4.1% is reserved for root user)
  space free for non superuser 2.3 TB [43.8%]
  space free total             2.5 TB [47.9%]
  inodes total                 357040128
  inodes free                  351580086 [98.5%]
  data collected               Thu, 13 Nov 2025 11:42:44


mounts:

# cat /proc/mounts 
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs 
rw,nosuid,relatime,size=1669380k,nr_inodes=417345,mode=755,inode64 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=600,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,noexec,relatime,size=343572k,mode=755,inode64 
0 0
/dev/mapper/ubuntu--vg-ubuntu--lv / ext4 rw,relatime 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,inode64 0 0
cgroup2 /sys/fs/cgroup cgroup2 
rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot 0 0
none /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
bpf /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs 
rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11776 
0 0
hugetlbfs /dev/hugepages hugetlbfs rw,nosuid,nodev,relatime,pagesize=2M 0 0
debugfs /sys/kernel/debug debugfs rw,nosuid,nodev,noexec,relatime 0 0
mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k,inode64 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev,nr_inodes=1048576,inode64 0 0
tracefs /sys/kernel/tracing tracefs rw,nosuid,nodev,noexec,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,nosuid,nodev,noexec,relatime 0 0
configfs /sys/kernel/config configfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /run/credentials/systemd-resolved.service tmpfs 
ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap
 0 0
/dev/sda2 /boot ext4 rw,relatime 0 0
/dev/loop0 /snap/core20/2669 squashfs 
ro,nodev,relatime,errors=continue,threads=single 0 0
/dev/loop3 /snap/snapd/25202 squashfs 
ro,nodev,relatime,errors=continue,threads=single 0 0
/dev/loop1 /snap/prometheus/86 squashfs 
ro,nodev,relatime,errors=continue,threads=single 0 0
/dev/loop4 /snap/snapd/25577 squashfs 
ro,nodev,relatime,errors=continue,threads=single 0 0
tmpfs /run/credentials/systemd-networkd.service tmpfs 
ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap
 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc 
rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /run/snapd/ns tmpfs 
rw,nosuid,nodev,noexec,relatime,size=343572k,mode=755,inode64 0 0
nsfs /run/snapd/ns/prometheus.mnt nsfs rw 0 0
tmpfs /run/credentials/[email protected] tmpfs 
ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap
 0 0
tmpfs /run/credentials/systemd-journald.service tmpfs 
ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap
 0 0
/dev/loop5 /snap/core20/2682 squashfs 
ro,nodev,relatime,errors=continue,threads=single 0 0
tmpfs /run/user/1000 tmpfs 
rw,nosuid,nodev,relatime,size=343568k,nr_inodes=85892,mode=700,uid=1000,gid=1000,inode64
 0 0
glusterfs1:/vol01 /glusterfs fuse.glusterfs 
rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
 0 0


Best regards,
Martin


> On 11. 11. 2025, at 9:18, [email protected] wrote:
> 
> We didn't test with glusterfs-fuse mount points before. We'll try to 
> reproduce the problem.
> 
> Please can you try to change the test this way?:
> 
>       check filesystem PEOPLE_SYSDATA path "10.1.1.99:/PEOPLE_SYSDATA"
>         start program = "/usr/bin/systemctl start PEOPLE_SYSDATA.mount"
>         stop program  = "/usr/bin/systemctl stop PEOPLE_SYSDATA.mount"
> 
> 
> Best regards,
> Martin
> 
> 
>> On 10. 11. 2025, at 11:24, lejeczek via This is the general mailing list for 
>> monit <[email protected]> wrote:
>> 
>> I tried that, it did not tell me much, I did not get it.
>> 
>> -> $ monit -vI 2>&1 | __grepColorIt '0:0/stat'
>> ...
>> Reloading mount information for filesystem '/PEOPLE_SYSDATA'
>> filesystem statistic error: cannot read /sys/dev/block/0:0/stat -- No such 
>> file or directory
>> 'PEOPLE_SYSDATA' succeeded getting filesystem statistics for 
>> '/PEOPLE_SYSDATA'
>> ...
>> Reloading mount information for filesystem '/PEOPLE_SYSDATA'
>> filesystem statistic error: cannot read /sys/dev/block/0:0/stat -- No such 
>> file or directory
>> 'PEOPLE_SYSDATA' succeeded getting filesystem statistics for 
>> '/PEOPLE_SYSDATA'
>> ...
>> 
>> it shows up in the company of /PEOPLE_SYSDATA but that is - but I have a few 
>> of those - a glusterfs-fuse mount point:
>> 
>> 10.1.1.99:/PEOPLE_SYSDATA /PEOPLE_SYSDATA fuse.glusterfs 
>> rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072 0 0
>> 
>> Here is monit for it:
>> 
>> check filesystem PEOPLE_SYSDATA path /PEOPLE_SYSDATA
>>   start program = "/usr/bin/systemctl start PEOPLE_SYSDATA.mount"
>>   stop program  = "/usr/bin/systemctl stop PEOPLE_SYSDATA.mount"
>> 
>> monit is 5.33.0
>> What am I missing?
>> thanks, L.
> 
