Re: [Gluster-devel] self heal daemon not running

2020-07-07 Thread Emmanuel Dreyfus
Emmanuel Dreyfus wrote:

> bidon# gluster volume heal gfs full
> Launching heal operation to perform full self heal on volume gfs has been
> unsuccessful: Self-heal daemon is not running. Check self-heal daemon log
> file.

I noticed that gluster volume heal gfs info shows:
Brick bidon:/export/wd2e
Status: Socket is not connected
Number of entries: -
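
For reference, a quick way to spot every disconnected brick at once is to
filter the heal info output (a minimal sketch; grep -B1 also prints the
Brick line that precedes each match):

bidon# gluster volume heal gfs info | grep -B1 'not connected'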

Killing the glusterfsd process for this brick and issuing gluster volume
start gfs force managed to get me out of this situation.
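
For anyone hitting the same state, the sequence was roughly this (a sketch;
the <PID> placeholder must be taken from the gluster volume status output
for the affected brick):

bidon# gluster volume status gfs | grep wd2e    # find the stuck brick's PID
bidon# kill <PID>                               # stop that glusterfsd
bidon# gluster volume start gfs force           # respawn missing brick processes
bidon# gluster volume heal gfs full             # retry the heal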


-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org



[Gluster-devel] self heal daemon not running

2020-07-07 Thread Emmanuel Dreyfus
Hi

I have posted about several problems and have not followed up yet, because
each time I run into a more troublesome one. This one is about self-heal.

bidon# gluster volume heal gfs full
Launching heal operation to perform full self heal on volume gfs has been 
unsuccessful:
Self-heal daemon is not running. Check self-heal daemon log file.

But it is running:

bidon# gluster volume status
Status of volume: gfs
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick bidon:/export/wd0e_tmp                49153     0          Y       20795
Brick baril:/export/wd0e                    49152     0          Y       8525
Brick bidon:/export/WD-WX11D741AXCA-v2      49155     0          Y       4553
Brick baril:/export/gfs1a                   49153     0          Y       7345
Brick bidon:/export/wd2e                    49156     0          Y       11034
Brick baril:/export/wd2e                    49154     0          Y       17998
Brick bidon:/export/wd3e_tmp                49157     0          Y       6547
Brick baril:/export/wd3e_tmp                49155     0          Y       1079
Self-heal Daemon on localhost               N/A       N/A        Y       7329
Self-heal Daemon on baril                   N/A       N/A        Y       8045

Task Status of Volume gfs
------------------------------------------------------------------------------
There are no active volume tasks
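
The daemon can also be cross-checked directly against the process table
(a sketch; 7329 is the localhost Self-heal Daemon PID reported above):

bidon# ps -p 7329 -o pid,comm    # should list the running glustershd process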

/var/log/glusterfs/glustershd.log does not log any error; there is just the
suspicious duplicate entry for volfile-server:

[2020-07-08 01:10:07.470068] I [glusterfsd-mgmt.c:77:mgmt_cbk_spec] 0-mgmt: 
Volume file changed
[2020-07-08 01:10:07.639752] I [glusterfsd-mgmt.c:77:mgmt_cbk_spec] 0-mgmt: 
Volume file changed
[2020-07-08 01:10:09.902087] I [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 
0-glusterfs: Received list of available volfile servers: baril:24007 
[2020-07-08 01:10:09.902285] I [MSGID: 101221] 
[common-utils.c:3822:gf_set_volfile_server_common] 0-gluster: duplicate entry 
for volfile-server [{errno=17}, {error=File exists}] 
[2020-07-08 01:10:09.927163] I [MSGID: 0] 
[options.c:1240:xlator_option_reconf_int32] 0-gfs-client-1: option ping-timeout 
using set value 42 
[2020-07-08 01:10:09.927168] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-client-1: option strict-locks 
using set value off 
[2020-07-08 01:10:09.927253] I [MSGID: 0] 
[options.c:1239:xlator_option_reconf_uint32] 0-gfs-replicate-0: option 
background-self-heal-count using set value 0 
[2020-07-08 01:10:09.927268] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-replicate-0: option 
metadata-self-heal using set value on 
[2020-07-08 01:10:09.927280] I [MSGID: 0] 
[options.c:1236:xlator_option_reconf_str] 0-gfs-replicate-0: option 
data-self-heal using set value on 
[2020-07-08 01:10:09.927291] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-replicate-0: option 
entry-self-heal using set value on 
[2020-07-08 01:10:09.927323] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-replicate-0: option 
self-heal-daemon using set value enable 
[2020-07-08 01:10:09.927335] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-replicate-0: option 
iam-self-heal-daemon using set value yes 
[2020-07-08 01:10:10.051900] I [MSGID: 0] 
[options.c:1240:xlator_option_reconf_int32] 0-gfs-client-2: option ping-timeout 
using set value 42 
[2020-07-08 01:10:10.051994] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-client-2: option strict-locks 
using set value off 
[2020-07-08 01:10:10.076733] I [MSGID: 0] 
[options.c:1240:xlator_option_reconf_int32] 0-gfs-client-3: option ping-timeout 
using set value 42 
[2020-07-08 01:10:10.076802] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-client-3: option strict-locks 
using set value off 
[2020-07-08 01:10:10.076869] I [MSGID: 0] 
[options.c:1239:xlator_option_reconf_uint32] 0-gfs-replicate-1: option 
background-self-heal-count using set value 0 
[2020-07-08 01:10:10.076892] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-replicate-1: option 
metadata-self-heal using set value on 
[2020-07-08 01:10:10.076946] I [MSGID: 0] 
[options.c:1236:xlator_option_reconf_str] 0-gfs-replicate-1: option 
data-self-heal using set value on 
[2020-07-08 01:10:10.076959] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-replicate-1: option 
entry-self-heal using set value on 
[2020-07-08 01:10:10.076990] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-replicate-1: option 
self-heal-daemon using set value enable 
[2020-07-08 01:10:10.077005] I [MSGID: 0] 
[options.c:1245:xlator_option_reconf_bool] 0-gfs-replicate-1: option 
iam-self-heal-daemon using set value yes 
[2020-07-08 01:10:10.107449] I [MSGID: 0] 
[options.c:1240:xlator_option_reconf_int32] 0-gfs-client-4: option ping-timeout