The problem occurred on the slave side, and its error is propagated to the
master. Almost any traceback that involves repce points to a problem on the
slave. Check a few lines above in the log to find the slave node the crashed
worker was connected to, then collect the geo-replication logs from that node
to debug further.
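For example (a sketch only: the log locations are the usual defaults, and the
session directory name, shown as a placeholder, varies across versions):

```
# On the master: the worker log names the slave node the crashed worker
# was connected to, a few lines above the repce traceback.
grep -B5 'repce' /var/log/glusterfs/geo-replication/<session-dir>/*.log

# On that slave node: collect the slave-side geo-replication logs.
less /var/log/glusterfs/geo-replication-slaves/<session-dir>/*.log
```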
Hi,
I have a replicate x3 volume with the following config:
```
Volume Name: gvol1
Type: Replicate
Volume ID: 384acec2-5b5f-40da-bf0e-5c53d12b3ae2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vm0:/srv/brick1/gvol1
Brick2: vm1:/srv/brick1/gvol1
```
You can ignore this error. It is fixed, and the fix should be available in the
next 4.1.x release.
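In the meantime, one way to check or silence the source of the message (a
sketch, assuming the 4.1 option name features.ctime; whether disabling it is
acceptable depends on your workload):

```
# check the current value of the ctime option on the volume
gluster volume get gvol1 features.ctime

# a possible stopgap if the messages are noisy
# (assumption: the workload does not rely on consistent ctime)
gluster volume set gvol1 features.ctime off
```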
On Sat, 22 Sep 2018, 07:07, Pedro Costa wrote:
> Forgot to mention, I'm running all VMs with 16.04.1-Ubuntu, Kernel
> 4.15.0-1023-azure #24
> *From:* Pedro Costa
> *Sent:* 21 September 2018 10:16
Forgot to mention, I'm running all VMs with 16.04.1-Ubuntu, Kernel
4.15.0-1023-azure #24
From: Pedro Costa
Sent: 21 September 2018 10:16
To: 'gluster-users@gluster.org'
Subject: posix set mdata failed, No ctime
Hi,
I have a replicate x3 volume with the following config:
```
Volume Name: gvol1
```
On Fri, Sep 21, 2018 at 12:44 AM Chaloulos, Klearchos (Nokia - GR/Athens) <
klearchos.chalou...@nokia.com> wrote:
> Hello,
>
> We are using glusterfs version 3.7.14, and have deployed the glusterfs
> servers in containers. We are trying to use the “gluster volume add-brick”
> command to extend a volume, but it fails:
Hi,
Any idea how to troubleshoot this?
New folders and files were created on the master (via Samba), and the
replication then went faulty.
Version: GlusterFS 4.1.3
[root@master]# gluster volume geo-replication status
MASTER NODE MASTER VOL MASTER BRICK
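A reasonable first step (a sketch; the session names are placeholders) is to
ask for per-worker detail, then read the worker log on whichever node reports
Faulty:

```
# per-brick worker status, including crawl status and last-synced time
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail

# on the faulty node, the worker log usually contains the traceback
less /var/log/glusterfs/geo-replication/<session-dir>/*.log
```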
On Thu, 2018-09-20 at 14:58 -0600, Terry McGuire wrote:
> > On Sep 19, 2018, at 06:37, Anoop C S wrote:
> >
> > On Wed, 2018-09-12 at 10:37 -0600, Terry McGuire wrote:
> > > > Can you please attach the output of `testparm -s` so as to look through
> > > > how Samba is set up?
> >
> > I have a
Hi Sanju,
Here is the output of 't a a bt'
(gdb) t a a bt
Thread 7 (LWP 444):
#0 0x3fff7a4d4ccc in __pthread_cond_timedwait (cond=0x10059a98,
mutex=0x10059a70, abstime=0x3fff77a50670) at pthread_cond_timedwait.c:198
#1 0x3fff7a5f1e74 in syncenv_task (proc=0x10053eb0) at syncop.c:607
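For readers unfamiliar with the shorthand: 't a a bt' is gdb's abbreviation of
'thread apply all backtrace', which prints a stack for every thread in the
process. A non-interactive way to capture the same output (a sketch; the
process name depends on which daemon is being inspected):

```
# attach, dump all thread backtraces, and detach in one shot
gdb -p "$(pidof glusterfsd)" -batch -ex 'thread apply all bt' > /tmp/threads.txt
```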
Hi Abhishek,
Can you please share the output of "t a a bt" with us?
Thanks,
Sanju
On Fri, Sep 21, 2018 at 2:55 PM, ABHISHEK PALIWAL
wrote:
>
> We have seen a SIGSEGV crash in the glusterfs process at startup after a
> kernel restart.
>
> (gdb) bt
> #0 0x3fffad4463b0 in _IO_unbuffer_all () at
We have seen a SIGSEGV crash in the glusterfs process at startup after a
kernel restart.
(gdb) bt
#0 0x3fffad4463b0 in _IO_unbuffer_all () at genops.c:960
#1 _IO_cleanup () at genops.c:1020
#2 0x3fffad400d00 in __run_exit_handlers (status=<optimized out>,
listp=<optimized out>, run_list_atexit=run_list_atexit@entry=true)
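If a core file was written, the full backtrace is easier to recover from it (a
sketch: the binary and core paths are placeholders, and the matching
debuginfo/-dbg packages need to be installed for readable symbols):

```
# load the core against the matching binary and print all thread stacks
gdb /usr/sbin/glusterfsd /path/to/core \
    -batch -ex 'bt full' -ex 'thread apply all bt'
```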
Hello,
We are using glusterfs version 3.7.14, and have deployed the glusterfs servers
in containers. We are trying to use the "gluster volume add-brick" command to
extend a volume, but it fails:
gluster volume add-brick oam replica 3 172.01.01.01:/mnt/bricks/oam force
volume add-brick: failed:
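The CLI message is usually terse; the underlying reason lands in the glusterd
log. A sketch, assuming default log locations (on 3.7.x installs the file is
typically named etc-glusterfs-glusterd.vol.log):

```
# confirm all peers are connected before retrying the add-brick
gluster peer status

# the real failure reason is usually spelled out in the glusterd log
tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
```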
Hi again,
In my limited understanding (I'm not a full-time programmer), it's a memory
leak in the gluster fuse client.
Should I reopen the mentioned bug report or open a new one? Or would the
community prefer an entirely different approach?
Thanks
Richard
On 13.09.18 10:07, Richard Neuboeck wrote:
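One way to gather evidence for a client-side leak is a statedump of the fuse
mount: sending SIGUSR1 asks gluster processes to dump their state, by default
under /var/run/gluster (a sketch; pidof may match several glusterfs processes,
so pick the PID of the mount in question):

```
# trigger a statedump of the fuse client
kill -USR1 "$(pidof glusterfs)"

# dumps include per-translator memory accounting; diff two dumps taken
# some time apart to see which allocation types keep growing
ls -lt /var/run/gluster/glusterdump.*
```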