Re: [Gluster-users] Need help in understanding volume heal-info behavior

2014-04-28 Thread Chalcogen
Thank you very much! On Monday 28 April 2014 07:41 AM, Ravishankar N wrote: On 04/28/2014 01:30 AM, Chalcogen wrote: Hi everyone, I have trouble understanding the following behavior: Suppose I have a replica 2 volume 'testvol' on two servers, server1 and server2, composed of serve

[Gluster-users] Need help in understanding volume heal-info behavior

2014-04-27 Thread Chalcogen
Hi everyone, I have trouble understanding the following behavior: Suppose I have a replica 2 volume 'testvol' on two servers, server1 and server2, composed of server1:/bricks/testvol/brick and server2:/bricks/testvol/brick. Also, suppose it contains a good number of files. Now, assume I rem
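For reference, a minimal sketch of the setup the post describes (hostnames and brick paths taken from the excerpt; the heal-info command is the one whose output is being asked about):

sh$ gluster volume create testvol replica 2 server1:/bricks/testvol/brick server2:/bricks/testvol/brick
sh$ gluster volume start testvol
sh$ gluster volume heal testvol info   # lists entries pending self-heal on each brick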

Re: [Gluster-users] Command "/etc/init.d/glusterd start" failed

2014-04-19 Thread Chalcogen
I have been plagued by errors of this kind every so often, mainly because we are in a development phase and we reboot our servers so frequently. If you start glusterd in debug mode (sh$ glusterd --debug), you can easily pinpoint exactly which volume/peer data is causing the initialization failure
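A hedged sketch of the approach described (the /var/lib/glusterd paths are glusterd's default state location, not something stated in the excerpt):

sh$ glusterd --debug              # run glusterd in the foreground with debug logging
sh$ ls /var/lib/glusterd/vols/    # per-volume configuration glusterd reads at startup
sh$ ls /var/lib/glusterd/peers/   # stored peer information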

Re: [Gluster-users] One node goes offline, the other node loses its connection to its local Gluster volume

2014-02-23 Thread Chalcogen
I'm not from the glusterfs development team or anything, but I, too, started with glusterfs somewhere around the time frame you mention, and also work with a twin-replicated setup just like yours. When I do what you indicate here on my setup, the command initially hangs, and on both servers for

[Gluster-users] Failed cleanup on peer probe tmp file causes volume re-initialization problems

2014-02-20 Thread Chalcogen
Hi everybody, This is more of a part of a larger wishlist: I found out that when a peer probe is performed by the user, mgmt/glusterd writes a file named after the hostname of the peer in question. On successful probes, this file is replaced with a file named after the UUID of the glusterd ins
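For context, a hedged illustration of the directory involved (the path is glusterd's default working directory; the hostname-named temporary file is the artifact the post reports being left behind):

sh$ gluster peer probe server2
sh$ ls /var/lib/glusterd/peers/
# on a successful probe this normally holds a file named after server2's UUID;
# a leftover file named after the hostname is what reportedly breaks re-initialization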

[Gluster-users] File (setuid) permission changes during volume heal - possible bug?

2014-01-27 Thread Chalcogen
Hi, I am working on a twin-replicated setup (server1 and server2) with glusterfs 3.4.0. I perform the following steps: 1. Create a distributed volume 'testvol' with the XFS brick server1:/brick/testvol on server1, and mount it using the glusterfs native client at /testvol. 2. I copy the
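A minimal sketch of steps 1 and 2 as far as the excerpt goes (the copied file is a hypothetical setuid binary; cp -p is used so its mode, including the setuid bit, is preserved):

sh$ gluster volume create testvol server1:/brick/testvol
sh$ gluster volume start testvol
sh$ mount -t glusterfs server1:/testvol /testvol
sh$ cp -p /usr/bin/passwd /testvol/   # hypothetical setuid file whose permissions are later checked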

Re: [Gluster-users] Passing noforget option to glusterfs native client mounts

2013-12-18 Thread Chalcogen
P.S. I think I need to clarify this: I am only reading from the mounts, and not modifying anything on the server, and so the commonest causes of stale file handles do not apply. Anirban On Thursday 19 December 2013 01:16 AM, Chalcogen wrote: Hi everybody, A few months back I joined a

[Gluster-users] Passing noforget option to glusterfs native client mounts

2013-12-18 Thread Chalcogen
Hi everybody, A few months back I joined a project where people want to replace their legacy fuse-based (twin-server) replicated file-system with GlusterFS. They also have a high-availability NFS server code tagged with the kernel NFSD that they would wish to retain (the nfs-kernel-server, I
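For context, a hedged sketch of the two usual ways the native client is mounted (the volume name testvol is hypothetical); whether a fuse option such as noforget can be passed through either path is precisely the question raised here:

sh$ mount -t glusterfs server1:/testvol /mnt/testvol
sh$ glusterfs --volfile-server=server1 --volfile-id=testvol /mnt/testvol   # invoking the client directly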