Serkan,
I have gone through other mails in the mail thread as well but responding
to this one specifically.
Is this a source install or an RPM install?
If this is an RPM install, could you please install the glusterfs-debuginfo
RPM and retry capturing the gdb backtrace?
If this is a source install,
I re-added gluster-users to get some more eyes on this.
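For reference, a minimal sketch of capturing that backtrace on an RPM-based
system (assumes the usual debuginfo repos are available; package names may
differ by distribution):

    # install debug symbols for the running glusterfs packages
    debuginfo-install glusterfs
    # attach to glusterd and dump backtraces of all threads
    gdb -batch -p $(pidof glusterd) \
        -ex "thread apply all bt full" > glusterd_backtrace.txt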
- Original Message -
> From: "Christoph Schäbel"
> To: "Ben Turner"
> Sent: Wednesday, August 30, 2017 8:18:31 AM
> Subject: Re: [Gluster-users] GFID attr is missing after adding
Hi Everton,
Thanks for your tip regarding "reset-sync-time". I understand now that I
should have used this additional parameter to get rid of the CHANGELOG
files.
I will now manually delete them from all bricks. I have also noticed the
following three geo-replication-related volume
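The sentence above is cut off, but it presumably refers to the volume options
gluster sets while a geo-replication session exists. A hedged sketch for
listing them ("myvol" is a placeholder volume name; option names vary by
version):

    gluster volume get myvol all | grep -E 'geo-replication|changelog'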
On Thu, Aug 24, 2017 at 11:19 AM, Serkan Çoban
wrote:
> Here you can find 10 stack trace samples from glusterd. I waited 10
> seconds between each trace.
> https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0
>
> Content of the first stack trace is here:
>
>
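The sampling described can be scripted; a minimal sketch, assuming pstack is
installed and a single glusterd process per node:

    #!/bin/sh
    # capture 10 glusterd stack traces, 10 seconds apart
    pid=$(pidof glusterd)
    for i in $(seq 1 10); do
        pstack "$pid" > "glusterd_pstack_$i.txt"
        sleep 10
    done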
Hi Gaurav,
Any progress on the issue?
On Tue, Aug 29, 2017 at 1:57 PM, Serkan Çoban wrote:
> glusterd returned to normal; here are the logs:
> https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0
>
>
> On Tue, Aug 29, 2017 at 1:47 PM,
Thank you for the acknowledgement.
On Thu, Aug 31, 2017 at 8:30 PM, mohammad kashif
wrote:
> Hi Atin
>
> Thanks. I was not running any script or gluster command, but now the gluster
> status command has started working. CPU usage also came down, and looking at
> the Ganglia graph,
Hey gluster experts,
We have a cluster of 20 physical servers, replica 2, with 40 bricks; the
first brick is showing errors such as those in the attached paste. It's around
a 1 PB system which is nearly full.
https://paste.ee/p/Dqdde
This seems to be a "file name too long" error, as the link is going
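If the paste is indeed showing ENAMETOOLONG, one hedged way to confirm it from
the brick logs (this is the default log location; adjust the path if yours
differs):

    # "File name too long" is the kernel's ENAMETOOLONG message
    grep -i "file name too long" /var/log/glusterfs/bricks/*.log | tail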
Hi,
I have the following setup in place:
1 node: RancherOS running the Rancher application for the Kubernetes setup
2 nodes: RancherOS running the Rancher agent
1 node: CentOS 7 workstation with kubectl installed and the folder
cloned/downloaded from https://github.com/gluster/gluster-kubernetes using
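A rough sketch of the usual deployment flow with that repository (the
gk-deploy script and a sample topology file ship with the repo; describe your
own nodes and devices before running):

    git clone https://github.com/gluster/gluster-kubernetes
    cd gluster-kubernetes/deploy
    # list nodes and raw block devices in topology.json first
    ./gk-deploy -g topology.json   # -g also deploys the GlusterFS pods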
BTW, I think it should be in 3.10.1 also.
We have backported it to 3.10.1 too.
If possible, upgrade to 3.11.0 and see whether you are still seeing these
messages.
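A quick way to confirm what a node is actually running before and after the
upgrade (RPM-based systems assumed):

    glusterfs --version      # version of the installed binaries
    rpm -q glusterfs-server  # installed server package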
- Original Message -
From: "Ashish Pandey"
To: "Amudhan P"
Cc: "Gluster Users"
Based on this BZ, https://bugzilla.redhat.com/show_bug.cgi?id=1414287,
it has been fixed in glusterfs-3.11.0.
---
Ashish
- Original Message -
From: "Amudhan P"
To: "Ashish Pandey"
Cc: "Gluster Users"
Sent:
Ashish, in which version has this issue been fixed?
On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P wrote:
> I am using 3.10.1; from which version is this update available?
>
>
> On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey
> wrote:
>
>>
>> Whenever we do some fop
The Red Hat documentation has a good process for cleaning an unusable
brick:
5.4.4. Cleaning An Unusable Brick
If the file system associated with the brick cannot be reformatted, and the
brick directory cannot be removed,
perform the following steps:
1. Delete all previously existing data in the
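The remaining steps of that procedure strip gluster's extended attributes from
the brick root; a hedged sketch of the same procedure (verify against the
current documentation, and note /bricks/brick1 is a placeholder path):

    # remove the volume-id and gfid xattrs so the brick can be reused
    setfattr -x trusted.glusterfs.volume-id /bricks/brick1
    setfattr -x trusted.gfid /bricks/brick1
    # remove gluster's internal metadata directory
    rm -rf /bricks/brick1/.glusterfs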
Hi Mabi,
If you will not use that geo-replication session again, I believe it is safe
to delete the files in the brick directory manually using rm -rf.
However, the gluster documentation specifies that if the session is to be
permanently deleted, this is the command to use:
gluster volume
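The command is cut off above; presumably it is the geo-replication delete,
which looks roughly like this (volume and host names are placeholders):

    # permanently delete the session; reset-sync-time also clears the
    # stored sync timestamp so a future session starts from scratch
    gluster volume geo-replication mastervol slavehost::slavevol \
        delete reset-sync-time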