Re: [Gluster-users] Fwd: bug in 3.4.1 when creating symlinks

2013-11-13 Thread Anand Avati
On Wed, Nov 13, 2013 at 12:14 PM, Peter Drake wrote: > Thanks for taking the time to look at this and reply. To clarify, the script that was running and created the log entries is an internal tool which does lots of other, unrelated things, but the part that caused the error takes actions v

Re: [Gluster-users] Fwd: bug in 3.4.1 when creating symlinks

2013-11-13 Thread Peter Drake
Thanks for taking the time to look at this and reply. To clarify, the script that was running and created the log entries is an internal tool which does lots of other, unrelated things, but the part that caused the error takes actions very similar to the gist. I tried to pull out the related log

Re: [Gluster-users] Fwd: bug in 3.4.1 when creating symlinks

2013-11-13 Thread Anand Avati
On Wed, Nov 13, 2013 at 9:01 AM, Peter Drake wrote: > I have a replicated Gluster setup, 2 servers (fs-1 and fs-2) x 1 brick. I have two clients (web-1 and web-2) which are connected and simultaneously execute tasks. These clients mount the Gluster volume at /mnt/gfs. One task they execu

[Gluster-users] Disabling NFS causes E level errors in nfs.log (bug 976750)

2013-11-13 Thread Emir Imamagic
Hello, according to bug 976750 (https://bugzilla.redhat.com/show_bug.cgi?id=976750), there is a problem with repeating error messages: [2013-11-13 17:16:11.94] E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when nfs is disabled on all volume
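The situation described in the bug report arises after the per-volume NFS server is switched off. As a hedged sketch (the volume name gv0 is an assumption, not taken from the report), that setting is:

```shell
# Disable the built-in Gluster NFS server on a volume (name assumed).
# With NFS off on all volumes, glusterd's management socket still retries
# the (now absent) NFS daemon, producing the E-level log noise reported.
gluster volume set gv0 nfs.disable on
```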

[Gluster-users] Fwd: bug in 3.4.1 when creating symlinks

2013-11-13 Thread Peter Drake
I have a replicated Gluster setup, 2 servers (fs-1 and fs-2) x 1 brick. I have two clients (web-1 and web-2) which are connected and simultaneously execute tasks. These clients mount the Gluster volume at /mnt/gfs. One task they execute looks like this (note this is pseudocode, the actual task i
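The actual pseudocode is truncated in the archive, but a task of the shape described (two clients on /mnt/gfs creating a directory and a symlink to it at the same time) can be sketched as follows; the path layout and function name are assumptions for illustration, not taken from the original gist:

```python
import errno
import os

def run_task(task_id, base="/mnt/gfs"):
    """Create a per-task directory and a symlink to it, tolerating
    the case where the other client got there first."""
    target = os.path.join(base, "tasks", str(task_id))
    link = os.path.join(base, "current")

    # makedirs may race with the other client; an existing dir is fine.
    os.makedirs(target, exist_ok=True)

    # symlink creation is not idempotent: if the other client already
    # created the link, we see EEXIST and must treat it as success.
    try:
        os.symlink(target, link)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
    return os.path.realpath(link)
```

Run concurrently from two mounts, the second symlink attempt is the step where a replicated volume must return a consistent error (EEXIST) rather than the spurious failures described in the thread.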

Re: [Gluster-users] Deleted files reappearing

2013-11-13 Thread Joe Julian
Bugs should be filed at https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS On 11/11/2013 11:24 PM, Øystein Viggen wrote: Lalatendu Mohanty writes: It sounds like a split-brain issue. The commands below will help you figure this out. gluster v heal info split-brain glu
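The command list is cut off in the archive; as a sketch, the usual full form of the heal-inspection commands being referred to is (volume name gv0 assumed):

```shell
# List files currently needing heal on volume "gv0"
gluster volume heal gv0 info
# List files the cluster considers to be in split-brain
gluster volume heal gv0 info split-brain
```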

Re: [Gluster-users] Fencing FOPs on data-split-brained files

2013-11-13 Thread Jeff Darcy
On 11/13/2013 06:01 AM, Ravishankar N wrote: Currently in glusterfs, when there is a data split-brain (only) on a file, we disallow the following operations from the mount-point by returning EIO to the application: - Writes to the file (truncate, dd, echo, cp etc) - Reads to the file (cat) - Readin

[Gluster-users] Replace brick incomplete

2013-11-13 Thread Raphael Rabelo
Hi everybody, I'm Brazilian and a new member of the list, and I need a little help. Scenario: 2 servers in replicated mode (server1 and server2); 1 new server, server3, which will replace server2. Today I have just one volume (gv0) and 2 bricks - servergv01 and server2. I want to replace server2 b
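For a replicated volume on the 3.4.x line, swapping one server for another is usually done with replace-brick followed by a full self-heal. The sketch below assumes brick paths and the volume name, which the truncated message does not give:

```shell
# Sketch only: brick paths (/export/brick) and volume name (gv0) assumed.
# 1. Bring the new server into the trusted pool:
gluster peer probe server3
# 2. Move server2's brick to server3 in one step:
gluster volume replace-brick gv0 server2:/export/brick \
    server3:/export/brick commit force
# 3. Populate the new brick from the surviving replica:
gluster volume heal gv0 full
```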

Re: [Gluster-users] Possible to bind to multiple addresses?

2013-11-13 Thread Emir Imamagic
Hi, just spotted your problem in a Google search; I assume you solved it in the meantime :) The problem with the transport.socket.bind-address parameter is that the glusterfs processes started by glusterd (the nfs and self-heal daemons) have a hardcoded localhost address (the --volfile-server/-s parameter)
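For context, the bind-address parameter being discussed lives in glusterd's own volfile. A hedged fragment (file path and address are assumptions; only the option name comes from the message):

```
# /etc/glusterfs/glusterd.vol (path varies by distribution)
volume management
    type mgmt/glusterd
    option transport-type socket
    # Bind the management daemon to one address. Daemons that glusterd
    # spawns with "-s localhost" can then no longer reach it, which is
    # the failure mode described above.
    option transport.socket.bind-address 10.0.0.5
end-volume
```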

[Gluster-users] Fencing FOPs on data-split-brained files

2013-11-13 Thread Ravishankar N
Hi, Currently in glusterfs, when there is a data split-brain (only) on a file, we disallow the following operations from the mount-point by returning EIO to the application: - Writes to the file (truncate, dd, echo, cp etc) - Reads to the file (cat) - Reading extended attributes (getfattr) [1]
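The fenced operations listed above can be illustrated from a client mount; the file name is assumed, and per the post each of these fails with EIO while the file is in data split-brain:

```shell
# On a data-split-brained file (name assumed), from the FUSE mount:
cat /mnt/gfs/afile            # read           -> EIO
echo x >> /mnt/gfs/afile      # write          -> EIO
getfattr -d /mnt/gfs/afile    # read xattrs    -> EIO
```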

Re: [Gluster-users] Deleted files reappearing

2013-11-13 Thread Øystein Viggen
Lalatendu Mohanty writes: > I am just curious what "gluster v heal info split-brain" returns when you see this issue? "Number of entries: 0" every time. Here's a test I did today, across the same four virtual machines with replica 2 and a fifth virtual machine as a native gluster