[Gluster-users] Rebalance failing to start

2021-01-04 Thread Pat Haley


Hi,

We have a cluster whose common storage is a gluster volume consisting of 
4 bricks residing on 2 servers (more details at bottom).  I have been 
cleaning out some unneeded files and noticed that most of them came off 
one brick.  When I tried to start a rebalance, I received the following error:


[root@mseas-data2 glusterfs]# gluster volume rebalance data-volume start
volume rebalance: data-volume: failed: Rebalance on data-volume is 
already started


However, when I checked the volume status, I didn't see any rebalance task:

[root@mseas-data2 glusterfs]# gluster volume status
Status of volume: data-volume
Gluster process TCP Port RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick mseas-data2:/mnt/brick1   49154 0  Y   21269
Brick mseas-data2:/mnt/brick2   49155 0  Y   21288
Brick mseas-data3:/export/sda/brick3    49153 0  Y   19514
Brick mseas-data3:/export/sdc/brick4    49154 0  Y   19533

Task Status of Volume data-volume
------------------------------------------------------------------------------
There are no active volume tasks
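
I assume the rebalance-specific status query,

gluster volume rebalance data-volume status

would be the other place to look.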

But if I use ps, I think I see a possible rebalance process:

root 15984  0.3  0.1 3766880 99756 ?   Ssl   2020 734:53 
/usr/sbin/glusterfs -s localhost --volfile-id rebalance/data-volume 
--xlator-option *dht.use-readdirp=yes --xlator-option 
*dht.lookup-unhashed=yes --xlator-option *dht.assert-no-child-down=yes 
--xlator-option *replicate*.data-self-heal=off --xlator-option 
*replicate*.metadata-self-heal=off --xlator-option 
*replicate*.entry-self-heal=off --xlator-option *dht.readdir-optimize=on 
--xlator-option *dht.rebalance-cmd=5 --xlator-option 
*dht.node-uuid=c1110fd9-cb99-4ca1-b18a-536a122d67ef --xlator-option 
*dht.commit-hash=4197750498 --socket-file 
/var/run/gluster/gluster-rebalance-c162161e-2a2d-4dac-b015-f31fd89ceb18.sock 
--pid-file 
/var/lib/glusterd/vols/data-volume/rebalance/c1110fd9-cb99-4ca1-b18a-536a122d67ef.pid 
-l /var/log/glusterfs/data-volume-rebalance.log
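
One cross-check that occurs to me (assuming the pid file named on that
command line is still current) is to compare it against the running process:

# read the PID recorded in the rebalance pid file from the ps output above
cat /var/lib/glusterd/vols/data-volume/rebalance/c1110fd9-cb99-4ca1-b18a-536a122d67ef.pid
# and confirm it matches the glusterfs process found above
ps -p 15984 -o pid,lstart,cmd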


I have 2 questions:

1. Is the process I found a gluster rebalance process?
2. Is it safe to simply kill the process?  Will I need to clean up some
   additional files (e.g. the socket file associated with this process)?
   I sketch what I have in mind below.
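
To be concrete, the cleanup I have in mind (an untested sketch; the socket
and pid file paths are copied from the ps output above) would be:

# stop the stale rebalance process found above
kill 15984
# then remove its socket and pid files if they are left behind
rm -f /var/run/gluster/gluster-rebalance-c162161e-2a2d-4dac-b015-f31fd89ceb18.sock
rm -f /var/lib/glusterd/vols/data-volume/rebalance/c1110fd9-cb99-4ca1-b18a-536a122d67ef.pid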

Thanks

--

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley  Email:  pha...@mit.edu
Center for Ocean Engineering   Phone:  (617) 253-6824
Dept. of Mechanical EngineeringFax:(617) 253-8125
MIT, Room 5-213http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Documentation volume heal info

2021-01-04 Thread Zenon Panoussis


I'm not sure whether this is the right place to report this,
but it won't hurt.

https://gluster.readthedocs.io/en/latest/Administrator-Guide/Managing-Volumes/

references

gluster volume heal <VOLNAME> info healed

and

gluster volume heal <VOLNAME> info failed

At least as of version 8.3, both of these arguments to info have
been removed (presumably replaced by info summary). The docs
need to be updated to reflect that.
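
For reference, the summary form that I assume supersedes them is:

gluster volume heal <VOLNAME> info summary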

Z







Re: [Gluster-users] Replication logic

2021-01-04 Thread Gionatan Danti

On 2021-01-03 04:48, Zenon Panoussis wrote:

Any ideas where I should look for the bottleneck? I can't find
anything even remotely relevant in any of the logs.


As Strahil already stated, it is the latency that is killing your setup. 
Just as I warned you before: Gluster synchronous replication is not 
suitable for high-latency links.
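
As a rough, illustrative calculation (numbers assumed, not measured on your
setup): with synchronous replication every write must be acknowledged by all
replicas before it completes, so each operation pays at least one inter-node
round trip. For example:

# measure the round-trip time between two replica nodes (hostname is hypothetical)
ping -c 10 replica2.example.com
# at e.g. a 100 ms RTT, 10,000 sequential small-file operations spend
# at least 10000 x 0.1 s = ~17 minutes just waiting on the network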


Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8



