Re: [Gluster-devel] failed heal

2015-02-05 Thread Pranith Kumar Karampuri


- Original Message -
> From: "Niels de Vos" 
> To: "Pranith Kumar Karampuri" 
> Cc: gluster-us...@gluster.org, "Gluster Devel" 
> Sent: Friday, February 6, 2015 2:32:36 AM
> Subject: Re: [Gluster-devel] failed heal
> 
> On Thu, Feb 05, 2015 at 11:21:58AM +0530, Pranith Kumar Karampuri wrote:
> > 
> > On 02/04/2015 11:52 PM, David F. Robinson wrote:
> > >I don't recall if that was before or after my upgrade.
> > >I'll forward you an email thread for the current heal issues which are
> > >after the 3.6.2 upgrade...
> > This was executed after the upgrade on just one machine. 3.6.2 entry
> > locks are not compatible with releases <= 3.5.3 or with 3.6.1; that is
> > the reason. From 3.5.4 onwards and for releases >= 3.6.2, it should
> > work fine.
> 
> Oh, I was not aware of this requirement. Does it mean we should not mix
> deployments with these versions (what about 3.4?) any longer? 3.5.4 has
> not been released yet, so anyone with a mixed 3.5/3.6.2 environment will
> hit these issues? Is this only for the self-heal daemon, or are the
> triggered/stat self-heal procedures affected too?
> 
> It should be noted *very* clearly in the release notes, and I think an
> announcement (email+blog) as a warning/reminder would be good. Could you
> get some details and advice written down, please?
Will do today.

Pranith
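
In the meantime, a minimal sketch (hostnames illustrative, shortened from
the brick list in this thread) of confirming that every node in the pool
runs a compatible release before relying on self-heal:

    # run from any admin box; adjust hostnames to your pool
    for host in gfsib01a gfsib01b gfsib02a gfsib02b; do
        ssh "$host" 'glusterfs --version | head -n1'
    done
    # every node should report 3.5.4+ or 3.6.2+; mixing 3.6.2 with
    # <= 3.5.3 or 3.6.1 hits the entry-lock incompatibility above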
> 
> Thanks,
> Niels
> 
> 
> > 
> > Pranith
> > >David
> > >-- Original Message --
> > >From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > >To: "David F. Robinson" <david.robin...@corvidtec.com>;
> > >"gluster-us...@gluster.org" <gluster-us...@gluster.org>;
> > >"Gluster Devel" <gluster-devel@gluster.org>
> > >Sent: 2/4/2015 2:33:20 AM
> > >Subject: Re: [Gluster-devel] failed heal
> > >>
> > >>On 02/02/2015 03:34 AM, David F. Robinson wrote:
> > >>>I have several files that gluster says it cannot heal. I deleted the
> > >>>files from all of the bricks
> > >>>(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full
> > >>>heal using 'gluster volume heal homegfs full'.  Even after the full
> > >>>heal, the entries below still show up.
> > >>>How do I clear these?
> > >>3.6.1 had an issue where files undergoing I/O would also be shown in the
> > >>output of 'gluster volume heal <volname> info'; we addressed that in
> > >>3.6.2. Is this output from 3.6.1 by any chance?
> > >>
> > >>Pranith
> > >>>[root@gfs01a ~]# gluster volume heal homegfs info
> > >>>Gathering list of entries to be healed on volume homegfs has been
> > >>>successful
> > >>>Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs
> > >>>Number of entries: 10
> > >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp
> > >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke
> > >>>Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs
> > >>>Number of entries: 2
> > >>>
> > >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke
> > >>>Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs
> > >>>Number of entries: 7
> > >>>
> > >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES/.tmpcheck
> > >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES
> > >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies
> > >>>
> > >>>
> > >>>
> > >>>Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs
> > >>>Number of entries: 0
> > >>>Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs
> > >>>Number of entries: 0
> > >>>Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
> > >>>Number of entries: 0
> > >>>Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
> > >>>Number of entries: 0
> > >>>Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
> > >>>Number of entries: 0
> > >>>===
> > >>>David F. Robinson, Ph.D.
> > >>>President - Corvid Technologies
> > >>>704.799.6944 x101 [office]
> > >>>704.252.1310 [cell]
> > >>>704.799.7974 [fax]
> > >>>david.robin...@corvidtec.com
> > >>>http://www.corvidtechnologies.com
> > >>>
> > >>>
> > >>
> > 
> 
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] failed heal

2015-02-05 Thread Niels de Vos
On Thu, Feb 05, 2015 at 11:21:58AM +0530, Pranith Kumar Karampuri wrote:
> 
> On 02/04/2015 11:52 PM, David F. Robinson wrote:
> >I don't recall if that was before or after my upgrade.
> >I'll forward you an email thread for the current heal issues which are
> >after the 3.6.2 upgrade...
> This was executed after the upgrade on just one machine. 3.6.2 entry locks
> are not compatible with releases <= 3.5.3 or with 3.6.1; that is the reason.
> From 3.5.4 onwards and for releases >= 3.6.2, it should work fine.

Oh, I was not aware of this requirement. Does it mean we should not mix
deployments with these versions (what about 3.4?) any longer? 3.5.4 has
not been released yet, so anyone with a mixed 3.5/3.6.2 environment will
hit these issues? Is this only for the self-heal daemon, or are the
triggered/stat self-heal procedures affected too?

It should be noted *very* clearly in the release notes, and I think an
announcement (email+blog) as a warning/reminder would be good. Could you
get some details and advice written down, please?

Thanks,
Niels
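
For context: the "triggered/stat" heal asked about above is the client-side
heal that runs when a path is looked up from a mount. A sketch (mount path
illustrative) of provoking it by hand:

    # a stat from a client mount issues a lookup, which kicks off a
    # client-side heal of that path
    stat /mnt/homegfs/hpc_shared/motorsports/gmics/Raven/p3/70_rke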


> 
> Pranith
> >David
> >-- Original Message --
> >From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> >To: "David F. Robinson" <david.robin...@corvidtec.com>;
> >"gluster-us...@gluster.org" <gluster-us...@gluster.org>;
> >"Gluster Devel" <gluster-devel@gluster.org>
> >Sent: 2/4/2015 2:33:20 AM
> >Subject: Re: [Gluster-devel] failed heal
> >>
> >>On 02/02/2015 03:34 AM, David F. Robinson wrote:
> >>>I have several files that gluster says it cannot heal. I deleted the
> >>>files from all of the bricks
> >>>(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full
> >>>heal using 'gluster volume heal homegfs full'.  Even after the full
> >>>heal, the entries below still show up.
> >>>How do I clear these?
> >>3.6.1 had an issue where files undergoing I/O would also be shown in the
> >>output of 'gluster volume heal <volname> info'; we addressed that in
> >>3.6.2. Is this output from 3.6.1 by any chance?
> >>
> >>Pranith
> >>>[root@gfs01a ~]# gluster volume heal homegfs info
> >>>Gathering list of entries to be healed on volume homegfs has been
> >>>successful
> >>>Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs
> >>>Number of entries: 10
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke
> >>>Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs
> >>>Number of entries: 2
> >>>
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke
> >>>Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs
> >>>Number of entries: 7
> >>>
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES/.tmpcheck
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies
> >>>
> >>>
> >>>
> >>>Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs
> >>>Number of entries: 0
> >>>Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs
> >>>Number of entries: 0
> >>>Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
> >>>Number of entries: 0
> >>>Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
> >>>Number of entries: 0
> >>>Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
> >>>Number of entries: 0
> >>>===
> >>>David F. Robinson, Ph.D.
> >>>President - Corvid Technologies
> >>>704.799.6944 x101 [office]
> >>>704.252.1310 [cell]
> >>>704.799.7974 [fax]
> >>>david.robin...@corvidtec.com
> >>>http://www.corvidtechnologies.com
> >>>
> >>>
> >>
> 




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] failed heal

2015-02-04 Thread Pranith Kumar Karampuri


On 02/04/2015 11:52 PM, David F. Robinson wrote:

I don't recall if that was before or after my upgrade.
I'll forward you an email thread for the current heal issues which are 
after the 3.6.2 upgrade...
This was executed after the upgrade on just one machine. 3.6.2 entry 
locks are not compatible with releases <= 3.5.3 or with 3.6.1; that is 
the reason. From 3.5.4 onwards and for releases >= 3.6.2, it should 
work fine.


Pranith

David
-- Original Message --
From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
To: "David F. Robinson" <david.robin...@corvidtec.com>;
"gluster-us...@gluster.org" <gluster-us...@gluster.org>;
"Gluster Devel" <gluster-devel@gluster.org>

Sent: 2/4/2015 2:33:20 AM
Subject: Re: [Gluster-devel] failed heal


On 02/02/2015 03:34 AM, David F. Robinson wrote:
I have several files that gluster says it cannot heal. I deleted the 
files from all of the bricks 
(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a 
full heal using 'gluster volume heal homegfs full'.  Even after the 
full heal, the entries below still show up.

How do I clear these?
3.6.1 had an issue where files undergoing I/O would also be shown in 
the output of 'gluster volume heal <volname> info'; we addressed that 
in 3.6.2. Is this output from 3.6.1 by any chance?


Pranith
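
A hedged way to tell whether such an entry is genuinely pending or just 
in-flight I/O is to read the AFR changelog xattrs directly on the brick 
(run on the brick server; the path below is one of the entries from the 
output):

    # all-zero trusted.afr.* values mean no heal is actually pending
    getfattr -d -m . -e hex \
        /data/brick01a/homegfs/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp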

[root@gfs01a ~]# gluster volume heal homegfs info
Gathering list of entries to be healed on volume homegfs has been 
successful

Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 10
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies







/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp
/hpc_shared/motorsports/gmics/Raven/p3/70_rke
Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 2

/hpc_shared/motorsports/gmics/Raven/p3/70_rke
Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 7

/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES/.tmpcheck
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies



Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] failed heal

2015-02-04 Thread David F. Robinson

I don't recall if that was before or after my upgrade.
I'll forward you an email thread for the current heal issues which are 
after the 3.6.2 upgrade...


David


-- Original Message --
From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
To: "David F. Robinson" <david.robin...@corvidtec.com>;
"gluster-us...@gluster.org" <gluster-us...@gluster.org>; "Gluster Devel"
<gluster-devel@gluster.org>


Sent: 2/4/2015 2:33:20 AM
Subject: Re: [Gluster-devel] failed heal



On 02/02/2015 03:34 AM, David F. Robinson wrote:
I have several files that gluster says it cannot heal.  I deleted the 
files from all of the bricks 
(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full 
heal using 'gluster volume heal homegfs full'.  Even after the full 
heal, the entries below still show up.

How do I clear these?
3.6.1 had an issue where files undergoing I/O would also be shown in the 
output of 'gluster volume heal <volname> info'; we addressed that in 
3.6.2. Is this output from 3.6.1 by any chance?


Pranith



[root@gfs01a ~]# gluster volume heal homegfs info
Gathering list of entries to be healed on volume homegfs has been 
successful


Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 10
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies







/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp
/hpc_shared/motorsports/gmics/Raven/p3/70_rke

Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 2

/hpc_shared/motorsports/gmics/Raven/p3/70_rke

Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 7

/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES/.tmpcheck
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies




Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0

Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 0

Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 0

Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 0

Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0


===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] failed heal

2015-02-03 Thread Pranith Kumar Karampuri


On 02/02/2015 03:34 AM, David F. Robinson wrote:
I have several files that gluster says it cannot heal.  I deleted the 
files from all of the bricks 
(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full 
heal using 'gluster volume heal homegfs full'.  Even after the full 
heal, the entries below still show up.

How do I clear these?
3.6.1 had an issue where files undergoing I/O would also be shown in the 
output of 'gluster volume heal <volname> info'; we addressed that in 
3.6.2. Is this output from 3.6.1 by any chance?


Pranith
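
An aside on clearing such entries: every file on a brick also has a hard 
link under the brick's .glusterfs directory, keyed by its GFID, so 
deleting only the data path can leave the entry behind. A sketch (file 
path and GFID value illustrative) of finding and removing the link:

    # read the file's gfid from the brick copy
    getfattr -n trusted.gfid -e hex /data/brick01a/homegfs/path/to/file
    # if it reports, say, 0xd0b9b8a8... the hard link lives under
    # .glusterfs/<first-byte>/<second-byte>/<uuid-form-of-gfid>:
    ls -l /data/brick01a/homegfs/.glusterfs/d0/b9/d0b9b8a8-*
    # remove both the data-path file and this link on every brick that
    # has them, then re-check 'gluster volume heal homegfs info'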

[root@gfs01a ~]# gluster volume heal homegfs info
Gathering list of entries to be healed on volume homegfs has been 
successful

Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 10
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies







/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp
/hpc_shared/motorsports/gmics/Raven/p3/70_rke
Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 2

/hpc_shared/motorsports/gmics/Raven/p3/70_rke
Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 7

/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES/.tmpcheck
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies



Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com 
http://www.corvidtechnologies.com




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel