Re: [Gluster-users] Issues with replicated gluster volume

2020-07-17 Thread Nikhil Ladha
Hi Ahemad, A few days back you had an issue with the replica volumes, and your log contained an entry like this ``[2020-06-16 07:19:27.418884] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlat

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-17 Thread Karthik Subrahmanya
Hi Ahemad, Glad to hear that your problem is resolved. Thanks Strahil and Hubert for your suggestions. On Wed, Jun 17, 2020 at 12:29 PM ahemad shaik wrote: > Hi > > I tried starting and enabling the glusterfsd service suggested by Hubert > and Strahil, I see that works when one of the gluster

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Karthik Subrahmanya
Hi Ahemad, Sorry for a lot of back and forth on this, but we might need a few more details to find the actual cause here. Which version of gluster are you running on the server and client nodes? Also, please provide the statedump [1] of the bricks and the client process when the hang is seen. [1] https://docs
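For readers following along, the statedumps Karthik asks for can be generated from the command line. A minimal sketch, assuming the volume from this thread is named "glustervol" and the client is a fuse mount (the pgrep pattern is an assumption about how the client process appears in the process list):

```shell
# Server side: dump the state of all brick processes of the volume.
# The dump files land in /var/run/gluster/ by default.
gluster volume statedump glustervol

# Client side: send SIGUSR1 to the fuse client process, which makes it
# write its own statedump to the same directory.
kill -USR1 $(pgrep -f 'glusterfs.*glustervol')
```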

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Strahil Nikolov
In my cluster, the service is enabled and running. What actually is your problem? When a gluster brick process dies unexpectedly, all fuse clients will be waiting for the timeout. The glusterfsd service ensures that during system shutdown the brick processes will be shut down in suc

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Strahil Nikolov
Hi ahemad, the script kills all gluster processes, so the clients won't wait for the timeout before switching to another node in the TSP. In CentOS/RHEL, there is a systemd service called 'glusterfsd.service' that takes care of killing all processes on shutdown, so clients won't h
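The timeout Strahil refers to is the volume's network.ping-timeout, which defaults to 42 seconds. A sketch of the two knobs involved, assuming the volume name "glustervol" from this thread:

```shell
# Run the cleanup script manually before taking a node down, so fuse
# clients fail over immediately instead of waiting out the timeout:
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

# The timeout itself can also be lowered per volume if faster failover
# after an unexpected brick death is desired:
gluster volume set glustervol network.ping-timeout 10
```

Lowering the ping timeout too far can cause spurious disconnects under load, so it is a trade-off rather than a free win.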

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread ahemad shaik
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh is to stop the processes? If one of the gluster node volumes goes down with some issue, and we didn't do a reboot or shutdown, what do we need to do in those cases? Will the script/systemd service created from the script be able to notify cli

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Strahil Nikolov
Hi Ahemad, You can simplify it by creating a systemd service that will call the script. It was already mentioned in a previous thread (with an example), so you can just use it. Best Regards, Strahil Nikolov On 16 June 2020 16:02:07 GMT+03:00, Hu Bert wrote: >Hi, > >if you simply re
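The unit Strahil mentions is not shown in this digest, but the idea can be sketched as a oneshot service whose ExecStop runs the cleanup script, so it fires during an orderly shutdown. The unit name and exact contents below are illustrative assumptions, not the official glusterfsd.service shipped by the distribution:

```ini
# /etc/systemd/system/gluster-kill-procs.service (hypothetical name)
[Unit]
Description=Stop all gluster processes cleanly at shutdown
After=network.target glusterd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl daemon-reload` followed by `systemctl enable --now gluster-kill-procs.service` would activate it, so the script runs when the unit is stopped at shutdown.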

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Hu Bert
Hi, if you simply reboot or shutdown one of the gluster nodes, there might be a (short or medium) unavailability of the volume on the nodes. To avoid this there's a script: /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh (the path may differ depending on the distribution). If I remember co

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Edward Berger
mount with -o backup-volfile-servers=: On Mon, Jun 15, 2020 at 1:36 PM ahemad shaik wrote: > Hi There, > > I have created 3 replica gluster volume with 3 bricks from 3 nodes. > > "gluster volume create glustervol replica 3 transport tcp node1:/data > node2:/data node3:/data force" > > mounted o
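Edward's snippet leaves the servers after the "=" elided. A sketch of the full form of the option, with node1/node2/node3 and the volume name "glustervol" taken from elsewhere in this thread as assumptions about what belongs there:

```shell
# Mount from node1, but let the client fetch the volfile from node2 or
# node3 if node1 is unreachable at mount time (hostnames assumed):
mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/glustervol /mnt/

# Equivalent /etc/fstab entry:
# node1:/glustervol  /mnt  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0
```

Note this only covers volfile retrieval at mount time; once mounted, the fuse client talks to all bricks directly, so runtime failover is governed by the replication and ping-timeout settings, not by this option.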

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Karthik Subrahmanya
Hi Ahemad, The logs don't seem to indicate anything specific, except for this message in the glusterd logs, which I am not sure causes any problems: [2020-06-16 07:19:27.418884] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/gluste

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread ahemad shaik
Sorry, it was a typo. The exact command I have used is below. The volume is mounted on node4: ""mount -t glusterfs node1:/glustervol /mnt/ "" The gluster volume is created from node1, node2 and node3: ""gluster volume create glustervol replica 3 transport tcp node1:/data node2:/data node3

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread ahemad shaik
Hi Karthik, Please find the requested info below. I see there are errors about being unable to connect to a port and a warning that the transport endpoint is not connected. Please find the complete logs below; kindly suggest. 1. gluster peer status Number of Peers: 2 Hostname: node1 Uuid: 0e679115-15ad-4a

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-15 Thread Karthik Subrahmanya
Hi, Thanks for the clarification. In that case, can you attach the complete glusterd, brick and mount logs from all the nodes from when this happened? Also paste the output that you see when you try to access or do operations on the mount point. Regards, Karthik On Tue, Jun 16, 2020 at 11:55 AM ah

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-15 Thread Karthik Subrahmanya
Hi Ahemad, A quick question on the mount command that you have used: "mount -t glusterfs node4:/glustervol /mnt/". Here you are specifying the hostname as node4 instead of node{1,2,3}, which actually host the volume that you intend to mount. Is this a typo, or did you paste the same command that yo

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-15 Thread Karthik Subrahmanya
Hi Ahemad, Please provide the following info: 1. gluster peer status 2. gluster volume info glustervol 3. gluster volume status glustervol 4. client log from node4 when you saw unavailability Regards, Karthik On Mon, Jun 15, 2020 at 11:07 PM ahemad shaik wrote: > Hi There, > > I have created 3