Thanks for taking the time to look at this and reply. To clarify, the
script that was running and created the log entries is an internal tool
which does lots of other, unrelated things, but the part that caused the
error takes actions very similar to the gist. I tried to pull out the
related log ...
Hello,
according to bug 976750
(https://bugzilla.redhat.com/show_bug.cgi?id=976750), the problem with the
repeating error message:
[2013-11-13 17:16:11.94] E [socket.c:2788:socket_connect]
0-management: connection attempt failed (Connection refused)
occurs when NFS is disabled on all volumes ...
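For context, a minimal sketch of how NFS gets disabled per volume; the volume name gv0 is only an illustration, not taken from the report:
  # turn off the built-in gluster NFS server for this volume
  gluster volume set gv0 nfs.disable on
  # verify the option took effect
  gluster volume info gv0 | grep nfs.disable
With the option set to "on" for every volume, no gluster NFS server runs, which matches the situation described above.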
I have a replicated Gluster setup, 2 servers (fs-1 and fs-2) x 1 brick. I
have two clients (web-1 and web-2) which are connected and simultaneously
execute tasks. These clients mount the Gluster volume at /mnt/gfs. One
task they execute looks like this (note this is pseudocode; the actual task ...)
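For anyone trying to reproduce this, a rough sketch of the setup described above; the brick path /export/brick1 and the volume name gv0 are assumptions, not taken from the original mail:
  # on fs-1: create and start a 2-way replicated volume across both servers
  gluster volume create gv0 replica 2 fs-1:/export/brick1 fs-2:/export/brick1
  gluster volume start gv0
  # on each client (web-1 and web-2): mount the volume at /mnt/gfs
  mkdir -p /mnt/gfs
  mount -t glusterfs fs-1:/gv0 /mnt/gfs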
Bugs should be filed at
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
On 11/11/2013 11:24 PM, Øystein Viggen wrote:
Lalatendu Mohanty writes:
It sounds like a split-brain issue. The commands below will help
you figure this out:
gluster v heal <volname> info split-brain
...
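As an additional check (a sketch only; the volume name gv0 and the brick/file paths are illustrative), the AFR changelog extended attributes can be inspected directly on each brick. If both replicas carry non-zero pending counters blaming the other copy, the file is in split-brain:
  # run on each server, against the file inside the brick directory
  getfattr -d -m . -e hex /export/brick1/path/to/file
  # compare the trusted.afr.gv0-client-0 / trusted.afr.gv0-client-1 values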
Hi everybody,
I'm Brazilian and a new member of the list, and I need a little help.
Scenario:
2 servers in replicated mode (server1 and server2)
1 new server (server3) that will replace server2
Today I have just one volume (gv0) with 2 bricks, on server1 and server2.
I want to replace server2 with server3 ...
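One common way to do that kind of swap is replace-brick followed by a full self-heal; this is only a sketch, the brick path /data/gv0 is invented here and the exact syntax should be checked against your GlusterFS version:
  # add the new server to the trusted pool
  gluster peer probe server3
  # move the brick from server2 to server3
  gluster volume replace-brick gv0 server2:/data/gv0 server3:/data/gv0 commit force
  # trigger self-heal so the data gets copied onto the new brick
  gluster volume heal gv0 full
  # once healing has finished, drop the old server from the pool
  gluster peer detach server2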
Hi,
just spotted your problem in a Google search; I assume you solved it in
the meantime :)
The problem with the transport.socket.bind-address parameter is that the
glusterfs processes started by glusterd (the NFS and self-heal daemons) have
a hardcoded localhost address (the --volfile-server/-s parameter) ...
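For anyone hitting the same thing, an illustration of the mismatch (the address 192.0.2.10 is a placeholder and other glusterd.vol options are omitted):
  # /etc/glusterfs/glusterd.vol - glusterd itself binds to the given address
  volume management
      type mgmt/glusterd
      option transport.socket.bind-address 192.0.2.10
  end-volume
  # but the NFS/self-heal daemons glusterd spawns still carry "-s localhost"
  ps ax | grep glusterfs
so they try to fetch their volfiles from 127.0.0.1, where glusterd is no longer listening.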
Hi,
Currently in glusterfs, when there is a data split-brain (only) on a file,
we disallow the following operations from the mount point by returning
EIO to the application (illustrated below):
- Writes to the file (truncate, dd, echo, cp, etc.)
- Reads of the file (cat)
- Reading extended attributes (getfattr) [1]
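A quick illustration of what the application sees for such a file (the mount point /mnt/gfs and the file name are assumptions, and the exact error text may vary):
  $ cat /mnt/gfs/somefile
  cat: /mnt/gfs/somefile: Input/output error
  $ getfattr -n user.test /mnt/gfs/somefile      # likewise fails with EIO
EIO is the errno handed back to the application, which cat reports as "Input/output error".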
Lalatendu Mohanty writes:
> I am just curious what "gluster v heal <volname> info split-brain"
> returns when you see this issue?
"Number of entries: 0" every time.
Here's a test I did today, across the same four virtual machines with
replica 2 and a fifth virtual machine as a native gluster client ...