+Poornima, who works on parallel-readdir.
@Poornima, Have you seen anything like this before?
On 14 June 2018 at 10:07, Nithya Balachandran wrote:
> This is not the same issue as the one you are referring to - that was in
> the RPC layer and caused the bricks to crash. This one is different as it
This is not the same issue as the one you are referring to - that was in the
RPC layer and caused the bricks to crash. This one is different, as it seems
to be in the dht and rda layers. It does look like a stack overflow, though.
@Mohammad,
Please send the following information:
1. gluster volume
On Wed, Jun 13, 2018 at 3:39 PM, Brian Andrus wrote:
> All,
>
> I have a 5x3 Distributed-Replicate filesystem that has a few entries that
> do not clean up when being healed.
>
> I had tracked down what they were and since they were really just
> temp/expendable files, I moved the directory and
All,
I have a 5x3 Distributed-Replicate filesystem that has a few entries
that do not clean up when being healed.
I had tracked down what they were and since they were really just
temp/expendable files, I moved the directory and recreated what was needed.
Now those files in the recreated
Try
gluster volume set VOLNAME client.bind-insecure on
and remount the clients. If the servers refuse the connection, you might
also have to set server.allow-insecure to on.
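Spelled out for a hypothetical volume gv0 mounted at /mnt/gv0 (names and
paths below are placeholders, not from the thread), the sequence would be
roughly:

```shell
# Let clients connect from non-reserved (>1024) source ports:
gluster volume set gv0 client.bind-insecure on

# If the bricks then refuse the connection, also allow insecure
# source ports on the server side:
gluster volume set gv0 server.allow-insecure on

# Remount on each client so the options take effect:
umount /mnt/gv0
mount -t glusterfs server1:/gv0 /mnt/gv0
```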
On Wed, Jun 13, 2018 at 9:41 AM, Milind Changire
wrote:
> On Wed, Jun 13, 2018 at 6:12 PM, Canh Ngo wrote:
>
>> Hi all,
>>
>> We
On Wed, Jun 13, 2018 at 6:12 PM, Canh Ngo wrote:
> Hi all,
>
> We run a storage cluster using GlusterFS v3.10.12 on CentOS7. Clients
> (CentOS) are using glusterfs 3.8.4.
>
> We notice that when clients mount bricks of a volume, glusterfs sometimes
> uses system ports (i.e. in the port range 0-1024) to
Hi all,
We run a storage cluster using GlusterFS v3.10.12 on CentOS7. Clients
(CentOS) are using glusterfs 3.8.4.
We notice that when clients mount bricks of a volume, glusterfs sometimes
uses system ports (i.e. in the port range 0-1024) to connect to the remote
glusterfsd port, e.g.:
Server:
tcp0
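To observe which source ports the fuse client is using, something like the
following works on the client (the ss invocation is a sketch; the process
name filter is an assumption):

```shell
# List established TCP connections belonging to the glusterfs fuse
# process; the local-address column shows the source port in use.
ss -tnp | grep glusterfs

# Source ports below 1024 are the reserved "system" ports mentioned
# above; gluster clients request them by default for rpc-auth unless
# client.bind-insecure is set on the volume.
```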
+Nithya
Nithya,
Do these logs [1] look similar to the recursive readdir() issue that you
encountered a while back?
i.e. recursive readdir() response definition in the XDR
[1] http://www-pnp.physics.ox.ac.uk/~mohammad/backtrace.log
On Wed, Jun 13, 2018 at 4:29 PM, mohammad kashif
wrote:
Hi Milind
Thanks a lot, I managed to run gdb and produced a backtrace as well. It's
here:
http://www-pnp.physics.ox.ac.uk/~mohammad/backtrace.log
I am trying to understand it but am still not able to make sense of it.
Thanks
Kashif
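For anyone reproducing this: a log like the backtrace.log above is
typically produced along these lines (the PID lookup, binary path, and
core-file path are illustrative, not taken from the thread):

```shell
# Attach to the running fuse client and dump every thread's stack:
gdb -p "$(pgrep -o -f glusterfs)" -batch \
    -ex "thread apply all bt full" > backtrace.log

# Or, against a core file left behind by the crash:
gdb /usr/sbin/glusterfs /path/to/core -batch \
    -ex "thread apply all bt full" > backtrace.log
```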
On Wed, Jun 13, 2018 at 11:34 AM, Milind Changire
wrote:
>
Kashif,
FYI: http://debuginfo.centos.org/centos/6/storage/x86_64/
On Wed, Jun 13, 2018 at 3:21 PM, mohammad kashif
wrote:
> Hi Milind
>
> There is no glusterfs-debuginfo available for gluster-3.12 from
> http://mirror.centos.org/centos/6/storage/x86_64/gluster-3.12/ repo. Do
> you know from
Hi Milind
There is no glusterfs-debuginfo available for gluster-3.12 in the
http://mirror.centos.org/centos/6/storage/x86_64/gluster-3.12/ repo. Do you
know where I can get it?
Also when I run gdb, it says
Missing separate debuginfos, use: debuginfo-install
glusterfs-fuse-3.12.9-1.el6.x86_64
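For the record, once matching packages exist somewhere, the gdb hint can be
satisfied either via yum or by installing an RPM directly. The repo URL
below mirrors Milind's link and whether it carries yum repodata is an
assumption; the RPM filename is illustrative:

```shell
# Option 1: let debuginfo-install (from yum-utils) resolve it, after
# pointing yum at the storage-SIG debuginfo location (assumed layout):
yum-config-manager --add-repo http://debuginfo.centos.org/centos/6/storage/x86_64/
debuginfo-install glusterfs-fuse-3.12.9-1.el6.x86_64

# Option 2: fetch a matching debuginfo RPM by hand and install it
# (exact filename is illustrative):
yum localinstall glusterfs-debuginfo-3.12.9-1.el6.x86_64.rpm
```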
Hi,
I'm testing some operations with gluster4 and glustercli.
I have re-installed a node with the same hostname/IP and added it back to
the cluster.
This resulted in a duplicate entry. That could be expected, but now I am
not able to remove the old entry, the new one, or any other node.
So at this point
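I can't speak to the glustercli specifics, but for reference, with the
classic CLI a stale entry for a reinstalled node is usually cleared like
this (the hostname is a placeholder):

```shell
# Inspect the peer list and note the stale entry's hostname/UUID:
gluster peer status

# Force-detach the stale entry for the reinstalled node:
gluster peer detach node1.example.com force
```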
On Wed, Jun 13, 2018 at 11:16:32AM +1200, Thing wrote:
> I am a bit lost here, why a replica 3 and arbiter 1? i.e. not replica 2
> arbiter 1?
You'd have to ask the developers about that (I just use gluster, I'm not
a dev). I agree that "replica 2 arbiter 1" seems more intuitive, but I
suppose
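For context, the wording comes from the volume-create syntax, where the
arbiter count is given as part of the replica specification (hostnames and
brick paths below are placeholders):

```shell
# "replica 3 arbiter 1": each subvolume has 3 bricks, the last of
# which is the arbiter and stores only metadata. So the replica count
# is 3 even though only 2 bricks hold full file data.
gluster volume create gv0 replica 3 arbiter 1 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb1
```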