Ignore that. I just realised you're on 3.7.14, so the problem may not be
with the granular entry self-heal feature.
-Krutika
On Tue, Aug 30, 2016 at 10:14 AM, Krutika Dhananjay
wrote:
> OK. Do you also have granular-entry-heal on - just so that I can isolate
> the problem area.
>
> -Krutika
>
> On
OK. Do you also have granular-entry-heal on - just so that I can isolate
the problem area.
-Krutika
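For reference, whether granular entry self-heal is on can be checked through the gluster CLI. A minimal sketch, assuming a hypothetical volume name `myvol` (the command string is built first so it can be reviewed before running):

```shell
# Hypothetical volume name; substitute your own.
VOL="myvol"

# cluster.granular-entry-heal is the volume option behind the feature.
# Build the query as a string so it can be inspected before execution.
QUERY_CMD="gluster volume get $VOL cluster.granular-entry-heal"

echo "$QUERY_CMD"
```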
On Tue, Aug 30, 2016 at 9:55 AM, Darrell Budic
wrote:
> I noticed that my new brick (replacement disk) did not have a .shard
> directory created on the brick, if that helps.
>
> I removed the aff
I noticed that my new brick (replacement disk) did not have a .shard directory
created on the brick, if that helps.
I removed the affected brick from the volume and then wiped the disk, did an
add-brick, and everything healed right up. I didn’t try to set any attrs or
anything else, just remo
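The remove/wipe/re-add cycle described above can be sketched with the gluster CLI. This is a sketch under assumptions, not the poster's exact commands: `myvol` and the brick path are placeholders, and the replica counts assume a 3-way replica volume. The commands are built as strings so they can be reviewed (and tested) before running:

```shell
# Placeholders; substitute your real volume name and brick path.
VOL="myvol"
BRICK="server4:/gluster1/BRICK1/1"

# Shrink the replica set by dropping the bad brick...
REMOVE_CMD="gluster volume remove-brick $VOL replica 2 $BRICK force"
# ...then, after wiping the disk, grow it back and let self-heal
# repopulate the new brick from the healthy replicas.
ADD_CMD="gluster volume add-brick $VOL replica 3 $BRICK force"

echo "$REMOVE_CMD"
echo "$ADD_CMD"
```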
I found an informative thread on a similar problem:
http://www.spinics.net/lists/gluster-devel/msg18400.html
According to the thread, it seems the solution is to disable the
quota, which clears the relevant xattrs, and then re-enable the quota,
which should force a recalculation. I will try this
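The disable/re-enable cycle maps to two gluster quota commands. A minimal sketch with a hypothetical volume name `myvol` (command strings only, for review before running):

```shell
# Placeholder volume name.
VOL="myvol"

# Disabling quota clears the quota-related xattrs on the bricks;
# re-enabling it triggers a fresh crawl and recalculation.
DISABLE_CMD="gluster volume quota $VOL disable"
ENABLE_CMD="gluster volume quota $VOL enable"

echo "$DISABLE_CMD"
echo "$ENABLE_CMD"
```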
On Mon, Aug 29, 2016 at 7:01 AM, Anuradha Talur wrote:
>
>
> - Original Message -
> > From: "David Gossage"
> > To: "Anuradha Talur"
> > Cc: "gluster-users@gluster.org List" ,
> "Krutika Dhananjay"
> > Sent: Monday, August 29, 2016 5:12:42 PM
> > Subject: Re: [Gluster-users] 3.8.3 Shar
Just to let you know I’m seeing the same issue under 3.7.14 on CentOS 7. Some
content was healed correctly, but now all the shards are queued up in the heal
list and nothing is healing. I got brick errors similar to the ones David was
getting, logged on the brick that isn’t healing:
[2016-08-29 03:31:4
Hi all,
Proposing the following:
Title: Object Storage with Gluster
Agenda
* Why object storage?
* Swift API and Amazon S3 API
* Using swift to provide object interface to Gluster volume
* Object operations
* Demo
-Prashanth Pai
- Original Message -
> From: "Raghavendra G"
> To: "Ar
Got it. Thanks.
I tried the same test and shd crashed with SIGABRT (well, that's because I
compiled from src with -DDEBUG).
In any case, this error would prevent full heal from proceeding further.
I'm debugging the crash now. Will let you know when I have the RC.
-Krutika
On Mon, Aug 29, 2016 at
On Mon, Aug 29, 2016 at 7:14 AM, David Gossage
wrote:
> On Mon, Aug 29, 2016 at 5:25 AM, Krutika Dhananjay
> wrote:
>
>> Could you attach both client and brick logs? Meanwhile I will try these
>> steps out on my machines and see if it is easily recreatable.
>>
>>
> Hoping 7z files are accepted b
- Original Message -
> From: "David Gossage"
> To: "Anuradha Talur"
> Cc: "gluster-users@gluster.org List" , "Krutika
> Dhananjay"
> Sent: Monday, August 29, 2016 5:12:42 PM
> Subject: Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow
>
> On Mon, Aug 29, 2016 at 5:39 AM, Anuradha
On Mon, Aug 29, 2016 at 5:39 AM, Anuradha Talur wrote:
> Response inline.
>
> - Original Message -
> > From: "Krutika Dhananjay"
> > To: "David Gossage"
> > Cc: "gluster-users@gluster.org List"
> > Sent: Monday, August 29, 2016 3:55:04 PM
> > Subject: Re: [Gluster-users] 3.8.3 Shards H
Response inline.
- Original Message -
> From: "Krutika Dhananjay"
> To: "David Gossage"
> Cc: "gluster-users@gluster.org List"
> Sent: Monday, August 29, 2016 3:55:04 PM
> Subject: Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow
>
> Could you attach both client and brick logs? Me
Hello,
back after the holidays. I haven't seen any new replies since this last mail;
I hope I haven't missed any (too many mails to parse…).
BTW it seems that my problem is very similar to this opened bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1369364
-> memory usage always increasing for (here
Could you attach both client and brick logs? Meanwhile I will try these
steps out on my machines and see if it is easily recreatable.
-Krutika
On Mon, Aug 29, 2016 at 2:31 PM, David Gossage
wrote:
> Centos 7 Gluster 3.8.3
>
> Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
> Brick2: ccgl2.gl.local:/g
Centos 7 Gluster 3.8.3
Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
Brick3: ccgl4.gl.local:/gluster1/BRICK1/1
Options Reconfigured:
cluster.data-self-heal-algorithm: full
cluster.self-heal-daemon: on
cluster.locking-scheme: granular
features.shard-block-size:
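The reconfigured options listed above can be read back per-option with `gluster volume get`. A minimal sketch, assuming a placeholder volume name `myvol` (it only prints the commands, one per option):

```shell
# Placeholder volume name.
VOL="myvol"

# Print the query command for each option that was reconfigured above.
for OPT in cluster.data-self-heal-algorithm cluster.self-heal-daemon \
           cluster.locking-scheme features.shard-block-size; do
  echo "gluster volume get $VOL $OPT"
done
```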
X540-t2 now, but in the past we used Solarflare with no particular issues.
On 26 Aug 2016 at 22:32, "Diego Remolina" wrote:
> Servers now also come with the copper 10Gbit network adapters built in the
> motherboard (Dell R730, supermicro, etc). But for those that do not, I have
> used the Inte