Hi guys.

I have a replica volume which is mounted via loopback, and it mounts okay:
-> $ mount | grep VMs
127.0.0.1:/VMsy on /00-VMsy type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
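(For context, nothing fancy on the mount side, just the standard mount helper with default options, i.e. the equivalent of:)
-> $ mount -t glusterfs 127.0.0.1:/VMsy /00-VMsy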

However, as soon as anything starts operating on this FUSE-mounted path, errors show up:
...
The message "W [MSGID: 114031] [client-rpc-fops_v2.c:1881:client4_0_seek_cbk] 0-VMsy-client-0: remote operation failed. [{errno=6}, {error=No such device or address}]" repeated 163398 times between [2025-06-25 11:01:29.439813 +0000] and [2025-06-25 11:02:34.980956 +0000]
...
The above is the result of a simple 'cp' from the FUSE mount to a separate filesystem; 'cp' gets stuck forever.
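errno 6 is ENXIO, and the fop that fails is SEEK - modern 'cp' triggers it via lseek(SEEK_DATA)/lseek(SEEK_HOLE) while scanning sparse source files. For anyone who wants to poke at it without 'cp', this should exercise the same fop directly (just a sketch, assuming xfs_io from xfsprogs is installed):
-> $ xfs_io -r -c "seek -d 0" /00-VMsy/enc.vdb.back1.proxmox.qcow2
On the failing boxes I'd expect that to error out the same way, while a plain sequential read (e.g. dd) goes through.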

However, one peer out of the three does not suffer from this issue; that box works as expected. Moreover, the issue does not occur at all, on any of the three boxes, if the volume is mounted this way:
10.1.1.100,10.1.1.101,10.1.1.99:/VMsy
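(That is the standard mount helper syntax; as far as I understand, the extra hosts just become backup volfile servers, so it should be equivalent to:)
-> $ mount -t glusterfs 10.1.1.100,10.1.1.101,10.1.1.99:/VMsy /00-VMsy
or, spelled out:
-> $ mount -t glusterfs -o backup-volfile-servers=10.1.1.101:10.1.1.99 10.1.1.100:/VMsy /00-VMsy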

All three peers/boxes are "virtually" identical, yet there is one more "oddity" on the peer which does _not_ fail with the above error, namely:
-> $ du -xh /00-VMsy//enc.vdb.back1.proxmox.qcow2
71G    /00-VMsy//enc.vdb.back1.proxmox.qcow2
whereas the other two report:
-> $ du -sh /00-VMsy//enc.vdb.back1.proxmox.qcow2
2.9G    /00-VMsy//enc.vdb.back1.proxmox.qcow2
'df' also reports it that way.
71G is the virtual size of the qcow2 file.
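To rule out real data divergence (as opposed to a mere sparseness difference), comparing apparent size vs. allocated blocks directly on each brick should tell; a sketch, using the brick path from the volume info below:
-> $ stat -c 'apparent=%s bytes, allocated=%b blocks' /devs/00.GLUSTERs/VMsy/enc.vdb.back1.proxmox.qcow2
-> $ du -sh --apparent-size /devs/00.GLUSTERs/VMsy/enc.vdb.back1.proxmox.qcow2
If the apparent sizes agree on the two data bricks (the arbiter stores metadata only) and just the allocated blocks differ, the replicas hold the same data and the file is simply fully allocated on the 'good' box. That might even explain the behaviour: as far as I know, 'cp' only does the SEEK_DATA scanning when the source looks sparse (allocated < apparent), so the fully-allocated copy would never hit the failing seek fop.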
I'd set aside for now the question of mounting any other way vs. mounting to 127.0.0.1 (still an interesting one) and ask: how do I make the "failing" boxes behave like the box which is error-free?
All thoughts are much appreciated.

Volume Name: VMsy
Type: Replicate
Volume ID: b843d9ea-b500-4b4c-9f0a-f2bae507d491
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.1.1.100:/devs/00.GLUSTERs/VMsy
Brick2: 10.1.1.101:/devs/00.GLUSTERs/VMsy
Brick3: 10.1.1.99:/devs/00.GLUSTERs/VMsy-arbiter (arbiter)
...

-> $ gluster --version
glusterfs 11.1