On Fri, Jul 15, 2016 at 5:20 PM, Jesper Led Lauridsen TS Infra server <
j...@dr.dk> wrote:
> Hi,
>
> How do I determine (in which log, etc.) whether a heal is in progress or
> has started, and how do I force one if it hasn't started?
>
> Additional info: I have a problem with a volume where, if I execute
>
Could you give volume info output?
On Fri, Jul 15, 2016 at 12:10 PM, Angelo Rivera wrote:
> Regarding the stripe/replicate configuration: I'm currently running this
> setup with 4 servers in it, with a total of 190GB. My question is:
>
> if in time I have already filled up the 190GB of
On 07/15/2016 09:55 PM, Kingsley wrote:
This has revealed something. I'm now seeing lots of lines like this in
the shd log:
[2016-07-15 16:20:51.098152] D [afr-self-heald.c:516:afr_shd_index_sweep]
0-callrec-replicate-0: got entry: eaa43674-b1a3-4833-a946-de7b7121bb88
[2016-07-15
Hi Xavier,
Sorry for the delay.
Thanks for your clear and helpful explanation.
This configuration went well, providing the expected result.
Dimitri
On 07/07/2016 11:01, Xavier Hernandez wrote:
Hi Dimitri,
On 06/07/16 18:13, Dimitri Pertin wrote:
Hi everyone,
I am trying to configure a
On 07/15/2016 08:48 PM, Kingsley wrote:
I don't have star installed so I used ls,
Oops typo. I meant `stat`.
but yes they all have 2 links
to them (see below).
Everything seems to be in place for the heal to happen. Can you tail -f
the output of the shd logs on all nodes and manually launch
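The suggestion above can be sketched as follows. The volume name "callrec" is taken from the shd log lines quoted earlier in the thread; the log path is the usual default and may differ on your distribution:

```shell
# On every node, follow the self-heal daemon log:
tail -f /var/log/glusterfs/glustershd.log

# From any one node, trigger an index heal manually:
gluster volume heal callrec

# If the index heal finds nothing, a full crawl can be forced:
gluster volume heal callrec full
```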
On 07/15/2016 08:23 PM, Rob Janney wrote:
Currently we are on 3.6: glusterfs 3.6.9 built on Mar 2 2016 18:21:17
That version does not have the real-time, gfapi-based version of the
`info split-brain` command; it just prints prior event history from
the self-heal daemon's memory. You can
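For comparison, on releases newer than 3.6 the same subcommand queries the bricks in real time; a minimal invocation (the volume name is a placeholder):

```shell
# Lists files currently in split-brain, queried live from the bricks:
gluster volume heal <VOLNAME> info split-brain
```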
Currently we are on 3.6: glusterfs 3.6.9 built on Mar 2 2016 18:21:17
Yes, the gfid file was still present at
/.glusterfs/ab/da/abdab36b-b90b-4201-98fe-7a36059da81d
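For reference, that `.glusterfs` backing path follows directly from the gfid: the first two directory levels are its first two byte pairs, and for a regular file the entry is a hard link to the real file. A small bash sketch (the brick root is an assumption):

```shell
# Build the .glusterfs backing path for a gfid under a brick root.
gfid_backing_path() {
    local brick=$1 gfid=$2
    printf '%s/.glusterfs/%s/%s/%s\n' \
        "$brick" "${gfid:0:2}" "${gfid:2:2}" "$gfid"
}

# The real path can then be recovered by inode, e.g.:
#   find /bricks/brick1 -samefile "$(gfid_backing_path /bricks/brick1 $GFID)"
gfid_backing_path /bricks/brick1 abdab36b-b90b-4201-98fe-7a36059da81d
# -> /bricks/brick1/.glusterfs/ab/da/abdab36b-b90b-4201-98fe-7a36059da81d
```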
On Thu, Jul 14, 2016 at 7:51 PM, Ravishankar N wrote:
>
>
> On 07/15/2016 02:24 AM, Rob Janney
On Fri, 2016-07-15 at 18:38 +0530, Ravishankar N wrote:
> On 07/15/2016 06:05 PM, Kingsley wrote:
> >chomp (my @output=`getfattr $path`);
>
>
> Could you try with `getfattr -d -m. -e hex $path` ?
Sure. I'm not really sure what I should be seeing, so I've uploaded the
full output here as
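Run against the file on each brick's backend (the brick path below is hypothetical), the suggested command dumps all extended attributes in hex, including `trusted.gfid` and the per-client `trusted.afr.*` keys:

```shell
# -d: dump values, -m .: match every xattr name, -e hex: hex-encode values
getfattr -d -m . -e hex /bricks/brick1/path/to/file
```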
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
Hi Ravi, thanks for replying.
I've checked all bricks for their respective gfid files but either the
files don't exist or getfattr produces no output.
What I've also found is that the gfid list shown for the 3 bricks that
stayed up contains the same list of entries, albeit not all in the same
Hi,
How do I determine (in which log, etc.) whether a heal is in progress or has
started, and how do I force one if it hasn't started?
Additional info: I have a problem with a volume where, if I execute
'gluster volume heal info', the command just hangs, but if I execute
'gluster volume heal info
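For both questions, a few commands worth trying (the volume name is a placeholder; heal progress is also logged on each node in /var/log/glusterfs/glustershd.log):

```shell
# Is anything pending? (this is the command that hangs for you)
gluster volume heal <VOLNAME> info

# Per-brick count of entries still needing heal:
gluster volume heal <VOLNAME> statistics heal-count

# Force a heal if one hasn't started:
gluster volume heal <VOLNAME>        # index heal
gluster volume heal <VOLNAME> full   # full crawl of the bricks
```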
Can you check the getfattr output of a few of those 129 entries from all
bricks? You basically need to see if there are non-zero AFR xattrs for
the files in question, which would indicate a pending heal.
-Ravi
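Each `trusted.afr.<volume>-client-N` value is three big-endian 32-bit counters: pending data, metadata, and entry operations. A small bash helper (hypothetical, for reading the hex strings that `getfattr -e hex` prints) makes "non-zero" easy to spot:

```shell
# Split a 24-hex-digit trusted.afr value into its three counters.
decode_afr() {
    local hex=${1#0x}
    printf 'data=%d metadata=%d entry=%d\n' \
        "0x${hex:0:8}" "0x${hex:8:8}" "0x${hex:16:8}"
}

# e.g. a value recording one pending data operation:
decode_afr 0x000000010000000000000000   # -> data=1 metadata=0 entry=0
```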
On 07/08/2016 03:12 PM, Kingsley wrote:
Further to this, I've noticed something
Can anybody help me with this?
Cheers,
Kingsley.
On Fri, 2016-07-08 at 10:08 +0100, Kingsley wrote:
> Hi,
>
> One of our bricks was offline for a few days when it didn't reboot after
> a yum update (the gluster version wasn't changed). The volume heal info
> is showing the same 129 entries, all
2016-07-14 17:36 GMT+02:00 Alastair Neil:
> I am not sure if your NICs support it, but you could try balance-alb (bonding
> mode 6); this does not require special switch support and I have had good
> results with it. As Lindsey said, the switch configuration could be
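For reference, balance-alb is just a bonding-driver option on the hosts; a sketch of a RHEL/CentOS-style ifcfg fragment (file name and interface names are assumptions):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch)
#   DEVICE=bond0
#   TYPE=Bond
#   BONDING_OPTS="mode=balance-alb miimon=100"
#   ONBOOT=yes
# Each slave interface (e.g. ifcfg-em1) then sets MASTER=bond0 and SLAVE=yes.
```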
Regarding the stripe/replicate configuration: I'm currently running this
setup with 4 servers in it, with a total of 190GB. My question is:
if in time I have already filled up the 190GB of my space, how am I going to
add new servers to it?
Is it OK if I just add 2 more servers, or do I need to add 4