Re: [Gluster-users] replica position

2015-02-03 Thread yang . bin18
From: Pranith Kumar Karampuri 
To: yang.bi...@zte.com.cn, gluster-users@gluster.org 
Date: 2015/01/28 16:40 
Subject: Re: [Gluster-users] replica position 




On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn wrote: 

Hi, 

Can the replica position be configured ? 
Could you give an example?

Pranith 

Regards. 

Bin.Yang 










___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] striped replicated for VMs?

2015-02-03 Thread Pranith Kumar Karampuri


On 01/31/2015 10:45 AM, Craig Yoshioka wrote:

Quick question:

I’ve been using GlusterFS for about a year now to hold some VM images 
(KVM/QEMU).  My setup has been 2 replica servers, but I’m now interested in 
adding two more servers to improve performance and space.

The volume is currently configured as “distributed” (of just one replica set). 
Should I (a) change it to striped for better VM performance, and (b) if I 
should change it, can a distributed volume be switched to striped and rebalanced 
while live?  Or should I just add another brick directory to each of the 
servers and create a new (striped|distributed) replica volume?
Striped volumes are not widely used. There is a feature called 
sharding, planned for 4.0, which should meet the need you are 
describing. We are trying to pull it into 3.7 itself.
You can find more details about it at: 
http://www.gluster.org/community/documentation/index.php/Features/sharding-xlator


Pranith
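
(Not the sharding approach described above, but since the question also asks about
simply adding a brick to each server: a minimal sketch of growing a 1x2 replica
into a 2x2 distributed-replicate volume. The volume, host and brick names are
hypothetical; adjust to your layout and try it on a scratch volume first.)

# Add one new brick on each of the two new servers, keeping replica count 2
gluster volume add-brick vmvol replica 2 srv3:/bricks/vm1 srv4:/bricks/vm1

# Spread existing files across the new replica pair
gluster volume rebalance vmvol start
gluster volume rebalance vmvol status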


Thanks!
-Craig
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Ideas list for GSOC

2015-02-03 Thread Kaushal M
Hi list,

Applying as an organization for GSOC requires us to prepare a publicly
accessible list of project ideas.

Anyone within the community who has an idea for a project, and is willing
to mentor students, please add the idea at [1]. (Add ideas even if you
can't or won't mentor them yourself.) There is already an existing list of ideas on
the page, most of them without any mentors. Anyone wishing to be just a
mentor can pick one or a few from that list and add themselves as mentors.

I'm hoping we can build a sizable list by the time the organization
registration opens next week.

Thanks.

~kaushal

[1]: http://www.gluster.org/community/documentation/index.php/Projects
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] failed heal

2015-02-03 Thread Pranith Kumar Karampuri


On 02/02/2015 03:34 AM, David F. Robinson wrote:
I have several files that gluster says it cannot heal.  I deleted the 
files from all of the bricks 
(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full 
heal using 'gluster volume heal homegfs full'.  Even after the full 
heal, the entries below still show up.

How do I clear these?
3.6.1 had an issue where files undergoing I/O would also be shown in the 
output of 'gluster volume heal <volname> info'; we addressed that in 
3.6.2. Is this output from 3.6.1 by any chance?


Pranith
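
(A quick way to confirm which version is actually running before interpreting the
output; a sketch assuming an RPM-based install and the volume name from this thread.)

# CLI/client version
gluster --version

# Server package on each brick host
rpm -q glusterfs-server

# Re-check after confirming versions and letting in-flight I/O settle
gluster volume heal homegfs info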

[root@gfs01a ~]# gluster volume heal homegfs info
Gathering list of entries to be healed on volume homegfs has been 
successful

Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 10
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies







/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp
/hpc_shared/motorsports/gmics/Raven/p3/70_rke
Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 2

/hpc_shared/motorsports/gmics/Raven/p3/70_rke
Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 7

/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES/.tmpcheck
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies



Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com 
http://www.corvidtechnologies.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] replica position

2015-02-03 Thread Pranith Kumar Karampuri

On 02/04/2015 12:54 PM, yang.bi...@zte.com.cn wrote:
>
> example: brick A, brick B, brick C compose one volume.
>
> When the client (fuse) on brick A writes data into the volume, for some
> reason we want the data written to brick B or brick C.
> When a write is performed from a fuse mount, the response will not
> come until the write happens on A, B, C. Order of the bricks does
> not matter.
>
> Pranith
>
> What if the volume is configured to have only one replica, or the volume
> is a DHT one?
> Then the bricks are not in replication. Do you want to change the
> position of the bricks?
>
> Pranith
>
> For simplification, let's suppose it is a DHT volume with three
> bricks and there is no replica. When the client (fuse) on brick A writes
> data into the volume, for some reason we want the data written to brick B
> or brick C.
> If it is a DHT volume, then the data of a file resides on either brick A,
> B, or C, based on the hash of the filename and the layout of the
> directory, not on all of the bricks A, B, C. Only the directories are
> created on all the bricks A, B, C.
>
> Pranith
>
> For safety reasons, we want the data to be directed to any brick
> except brick A.
It depends on the hash the filename generates. It may very well end up
on brick-A. What is the use case you want to address?

Pranith
>
>
> From: Pranith Kumar Karampuri
> To: yang.bi...@zte.com.cn, gluster-users@gluster.org
> Date: 2015/01/28 16:40
> Subject: Re: [Gluster-users] replica position
>
>
>
>
> On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn
> wrote:
>
> Hi,
>
> Can the replica position be configured ?
> Could you give an example?
>
> Pranith
>
> Regards.
>
> Bin.Yang

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] replica position

2015-02-03 Thread yang . bin18
From: Pranith Kumar Karampuri 
To: yang.bi...@zte.com.cn, gluster-users@gluster.org 
Date: 2015/01/28 16:40 
Subject: Re: [Gluster-users] replica position 




On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn wrote: 

Hi, 

Can the replica position be configured ? 
Could you give an example?

Pranith 

Regards. 

Bin.Yang 









___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] replica position

2015-02-03 Thread Pranith Kumar Karampuri

On 02/04/2015 12:40 PM, yang.bi...@zte.com.cn wrote:
>
>
>
>   
>
>
>
>
>
>
> From: Pranith Kumar Karampuri
> To: yang.bi...@zte.com.cn,
> Cc: gluster-users@gluster.org
> Date: 2015/02/04 14:59
> Subject: Re: [Gluster-users] replica position
>
>
>
>
>
> On 02/04/2015 12:23 PM, yang.bi...@zte.com.cn
> wrote:
>
>
>
>
> 
>
>
>
>
> On 01/30/2015 06:43 AM, yang.bi...@zte.com.cn
> wrote:
>
> example: brick A, brick B, brick C compose one volume.
> When the client (fuse) on brick A writes data into the volume, for some
> reason we want the data written to brick B or brick C.
> When a write is performed from a fuse mount, the response will not
> come until the write happens on A, B, C. Order of the bricks does not
> matter.
>
> Pranith
>
> What if the volume is configured to have only one replica, or the volume is
> a DHT one?
> Then the bricks are not in replication. Do you want to change the
> position of the bricks?
> Pranith
>
> For simplification, let's suppose it is a DHT volume with three
> bricks and there is no replica. When the client (fuse) on brick A writes
> data into the volume, for some reason we want the data written to brick B
> or brick C.
If it is a DHT volume, then the data of a file resides on either
brick A, B, or C, based on the hash of the filename and the layout of the
directory, not on all of the bricks A, B, C. Only the directories are
created on all the bricks A, B, C.

Pranith
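
(A small sketch of how to see this placement in practice, not part of the reply
above; the mount point and brick paths are hypothetical, the xattr names are the
standard DHT ones.)

# From a fuse mount: which brick actually holds a given file
getfattr -n trusted.glusterfs.pathinfo /mnt/testvol/somefile

# On a brick: the hash-range layout DHT assigned to a directory
getfattr -n trusted.glusterfs.dht -e hex /bricks/brickA/somedir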
>
>
>
> From: Pranith Kumar Karampuri
> To: yang.bi...@zte.com.cn, gluster-users@gluster.org
> Date: 2015/01/28 16:40
> Subject: Re: [Gluster-users] replica position
>
>
>
>
> On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn
> wrote:
>
> Hi,
>
> Can the replica position be configured ?
> Could you give an example?
>
> Pranith
>
> Regards.
>
> Bin.Yang
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] replica position

2015-02-03 Thread yang . bin18
From: Pranith Kumar Karampuri 
To: yang.bi...@zte.com.cn, 
Cc: gluster-users@gluster.org
Date: 2015/02/04 14:59
Subject: Re: [Gluster-users] replica position




On 02/04/2015 12:23 PM, yang.bi...@zte.com.cn wrote:








On 01/30/2015 06:43 AM, yang.bi...@zte.com.cn wrote: 

From: Pranith Kumar Karampuri 
To: yang.bi...@zte.com.cn, gluster-users@gluster.org 
Date: 2015/01/28 16:40 
Subject: Re: [Gluster-users] replica position 




On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn wrote: 

Hi, 

Can the replica position be configured ? 
Could you give an example?

Pranith 

Regards. 

Bin.Yang 








___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] replica position

2015-02-03 Thread Pranith Kumar Karampuri

On 02/04/2015 12:23 PM, yang.bi...@zte.com.cn wrote:
>
>
>
>
> 
>
>
>
>
> On 01/30/2015 06:43 AM, _yang.bi...@zte.com.cn_
> wrote:
>
> example: brick A, brick B, brick C compose one volume.
> When the client (fuse) on brick A writes data into the volume, for some
> reason we want the data written to brick B or brick C.
> When a write is performed from a fuse mount, the response will not
> come until the write happens on A, B, C. Order of the bricks does not
> matter.
>
> Pranith
>
> What if the volume is configured to have only one replica, or the volume is a
> DHT one?
Then the bricks are not in replication. Do you want to change the
position of the bricks?

Pranith
>
> From: Pranith Kumar Karampuri
> To: yang.bi...@zte.com.cn, gluster-users@gluster.org
> Date: 2015/01/28 16:40
> Subject: Re: [Gluster-users] replica position
>
>
>
>
>
> On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn
> wrote:
>
> Hi,
>
> Can the replica position be configured ?
> Could you give an example?
>
> Pranith
>
> Regards.
>
> Bin.Yang

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Pranith Kumar Karampuri


On 02/03/2015 10:42 PM, Ted Miller wrote:


On 1/31/2015 12:47 AM, Pranith Kumar Karampuri wrote:


On 01/30/2015 06:28 PM, Jeff Darcy wrote:
Pranith and I had a discussion regarding this issue and here is what we have
in our mind right now.

We plan to provide the user commands to execute from the mount so that he can
access the files in split-brain. This way he can choose which copy is to be
used as source. The user will have to perform a set of getfattrs and
setfattrs (on virtual xattrs) to decide which child to choose as source and
inform AFR of his decision.

A) To know the split-brain status:
getfattr -n trusted.afr.split-brain-status <path-to-file>

This will provide the user with the following details -
1) Whether the file is in metadata split-brain
2) Whether the file is in data split-brain

It will also list the names of the afr-children to choose from. Something like:

Option0: client-0
Option1: client-1

We also tell the user what he could do to view metadata/data info; like
stat to get metadata etc.

B) Now the user has to choose one of the options (client-x/client-y..) to
inspect the files.
e.g., setfattr -n trusted.afr.split-brain-choice -v client-0 <path-to-file>

We save the read-child info in inode-ctx in order to provide the user access
to the file in split-brain from that child. Once the user inspects the file,
he proceeds to do the same from the other child of the replica pair and makes an
informed decision.

C) Once the above steps are done, AFR is to be informed of the final choice
for source. This is achieved by -
(say the fresh copy is on client-0)
e.g., setfattr -n trusted.afr.split-brain-heal-finalize -v client-0

This child will be chosen as source and split-brain resolution will be done.
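
(A compact sketch of the proposed flow end to end, using only the virtual xattrs
named above; the mount path and file name are hypothetical and the interface was
still a proposal at this point.)

# 1) Check whether the file is in data/metadata split-brain and list the choices
getfattr -n trusted.afr.split-brain-status /mnt/gv0/hair-pulling.txt

# 2) Pick one child, then inspect that copy (stat, read, etc.); repeat for client-1
setfattr -n trusted.afr.split-brain-choice -v client-0 /mnt/gv0/hair-pulling.txt
stat /mnt/gv0/hair-pulling.txt

# 3) Tell AFR which copy to use as the heal source
setfattr -n trusted.afr.split-brain-heal-finalize -v client-0 /mnt/gv0/hair-pulling.txt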
May I suggest another possible way to get around the difficulty in 
determining which of the files is the one to keep?


What if each of the files were to be renamed by appending the name of 
the brick-host that it lives on?

For example, in a replica 2 system:
brick-1: data1
host-1: host1
brick-2: data1
host-2: host2
file name: hair-pulling.txt

after running script/command to resolve split-brain, file system would 
have two files:

hair-pulling.txt__host-1_data1
hair-pulling.txt__host-2_data1
This doesn't seem so bad either. I will need to give it more thought to 
see if there are any problems.


the user would then delete the unwanted file and rename the wanted 
file back to hair-pulling.txt.


The only problem would come with a very large file with a large number 
of replicas (like the replica 5 system I am working with). You might 
run out of space for all the copies.


Otherwise, this seems (to me) to present a user-friendly way to do 
this.  If the user has doubts (and disk space), user can choose to 
keep the rejected file around for a while, "just in case" it happens 
to have something useful in it that is missing from the "accepted" file.



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a 
"rejected" file?  For instance, in my example above, what if I go 
directly on the brick and rename the host-2 copy of the file to 
hair-pulling.txt-dud?  The ideal scenario would seem to be that if 
user does a heal it would treat the copy as new file, see no dupe for 
hair-pulling.txt, and create a new dupe on host-2.  Since 
hair-pulling.txt-dud is also a new file, a dupe would be created on 
host-1.  User could then access files, verify correctness, and then 
delete hair-pulling.txt-dud.


*

This one won't work because of the reason Joe gave about gfid-hardlinks.

A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of 
file

6. gluster volume heal data1 all
I believe that this did work for me at that time.  I have not had to 
do it on a recent gluster version.
This would work. You can check the document written by Ravi for this in 
the official tree: 
https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md


Pranith


Ted Miller
Elkhart, IN

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] replica position

2015-02-03 Thread yang . bin18
On 01/30/2015 06:43 AM, yang.bi...@zte.com.cn wrote:

From: Pranith Kumar Karampuri 
To: yang.bi...@zte.com.cn, gluster-users@gluster.org 
Date: 2015/01/28 16:40 
Subject: Re: [Gluster-users] replica position 




On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn wrote: 

Hi, 

Can the replica position be configured ? 
Could you give an example?

Pranith 

Regards. 

Bin.Yang 







___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Reply: Re: replica position

2015-02-03 Thread Pranith Kumar Karampuri

On 01/30/2015 06:43 AM, yang.bi...@zte.com.cn wrote:
>
> example: brick A, brick B, brick C compose one volume.
>
> When the client (fuse) on brick A writes data into the volume, for some reason
> we want the data written to brick B or brick C.
When a write is performed from a fuse mount, the response will not come
until the write happens on A, B, C. Order of the bricks does not matter.

Pranith
>
>
>
> From: Pranith Kumar Karampuri
> To: yang.bi...@zte.com.cn, gluster-users@gluster.org,
> Date: 2015/01/28 16:40
> Subject: Re: [Gluster-users] replica position
>
>
>
>
>
> On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn
> wrote:
>
> Hi,
>
> Can the replica position be configured ?
> Could you give an example?
>
> Pranith
>
> Regards.
>
> Bin.Yang
>
>   
>
>
>
>
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] missing files

2015-02-03 Thread David F. Robinson
Sorry.  Thought about this a little more. I should have been clearer.  
The files were on both bricks of the replica, not just one side.  So, 
both bricks had to have been up... The files/directories just don't show 
up on the mount.


I was reading and saw a related bug 
(https://bugzilla.redhat.com/show_bug.cgi?id=1159484).  I saw it 
suggested to run:


find <dir> -d -exec getfattr -h -n trusted.ec.heal {} \;


I get a bunch of errors for operation not supported:

[root@gfs02a homegfs]# find wks_backup -d -exec getfattr -h -n 
trusted.ec.heal {} \;
find: warning: the -d option is deprecated; please use -depth instead, 
because the latter is a POSIX-compliant feature.

wks_backup/homer_backup/backup: trusted.ec.heal: Operation not supported
wks_backup/homer_backup/logs/2014_05_20.log: trusted.ec.heal: Operation 
not supported
wks_backup/homer_backup/logs/2014_05_21.log: trusted.ec.heal: Operation 
not supported
wks_backup/homer_backup/logs/2014_05_18.log: trusted.ec.heal: Operation 
not supported
wks_backup/homer_backup/logs/2014_05_19.log: trusted.ec.heal: Operation 
not supported
wks_backup/homer_backup/logs/2014_05_22.log: trusted.ec.heal: Operation 
not supported

wks_backup/homer_backup/logs: trusted.ec.heal: Operation not supported
wks_backup/homer_backup: trusted.ec.heal: Operation not supported
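
(trusted.ec.heal is a virtual xattr of the disperse/EC translator, so "Operation
not supported" makes sense on a replicated volume such as homegfs. For a replica
volume the pending-heal state lives in the trusted.afr.* xattrs on the bricks;
a sketch, with the brick path taken from earlier in the thread and the file path
hypothetical.)

getfattr -d -m 'trusted.afr' -e hex /data/brick01a/homegfs/path/to/file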

-- Original Message --
From: "Benjamin Turner" 
To: "David F. Robinson" 
Cc: "Gluster Devel" ; 
"gluster-users@gluster.org" 

Sent: 2/3/2015 7:12:34 PM
Subject: Re: [Gluster-devel] missing files

It sounds to me like the files were only copied to one replica, weren't 
there for the initial ls which triggered a self heal, 
and were there for the last ls because they were healed.  Is there any 
chance that one of the replicas was down during the rsync?  It could be 
that you lost a brick during the copy or something like that.  To confirm, I 
would look for disconnects in the brick logs as well as check 
glustershd.log to verify that the missing files were actually healed.


-b
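
(One way to do that check; a sketch assuming the default /var/log/glusterfs layout
on the brick hosts.)

# Client disconnects seen by the bricks
grep 'disconnecting connection' /var/log/glusterfs/bricks/*.log

# Self-heal daemon activity for the affected paths
grep -i 'heal' /var/log/glusterfs/glustershd.log | tail -n 50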

On Tue, Feb 3, 2015 at 5:37 PM, David F. Robinson 
 wrote:
I rsync'd 20-TB over to my gluster system and noticed that I had some 
directories missing even though the rsync completed normally.

The rsync logs showed that the missing files were transferred.

I went to the bricks and did an 'ls -al /data/brick*/homegfs/dir/*' 
the files were on the bricks.  After I did this 'ls', the files then 
showed up on the FUSE mounts.


1) Why are the files hidden on the fuse mount?
2) Why does the ls make them show up on the FUSE mount?
3) How can I prevent this from happening again?

Note, I also mounted the gluster volume using NFS and saw the same 
behavior.  The files/directories were not shown until I did the "ls" 
on the bricks.


David



===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] missing files

2015-02-03 Thread David F. Robinson

Like these?

data-brick02a-homegfs.log:[2015-02-03 19:09:34.568842] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs02a.corvidtec.com-18563-2015/02/03-19:07:58:519134-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 19:09:41.286551] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-12804-2015/02/03-19:09:38:497808-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 19:16:35.906412] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs02b.corvidtec.com-27190-2015/02/03-19:15:53:458467-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 19:51:22.761293] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-25926-2015/02/03-19:51:02:89070-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 20:54:02.772180] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01b.corvidtec.com-4175-2015/02/02-16:44:31:179119-homegfs-client-2-0-1
data-brick02a-homegfs.log:[2015-02-03 22:44:47.458905] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-29467-2015/02/03-22:44:05:838129-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:47:42.830866] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-30069-2015/02/03-22:47:37:209436-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:48:26.785931] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-30256-2015/02/03-22:47:55:203659-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:53:25.530836] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-30658-2015/02/03-22:53:21:627538-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:56:14.033823] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-30893-2015/02/03-22:56:01:450507-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:56:55.622800] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-31080-2015/02/03-22:56:32:665370-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:59:11.445742] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-31383-2015/02/03-22:58:45:190874-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 23:06:26.482709] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-31720-2015/02/03-23:06:11:340012-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 23:10:54.807725] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-32083-2015/02/03-23:10:22:131678-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 23:13:35.545513] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-32284-2015/02/03-23:13:21:26552-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 23:14:19.065271] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-32471-2015/02/03-23:13:48:221126-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-04 00:18:20.261428] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-1369-2015/02/04-00:16:53:613570-homegfs-client-2-0-0


-- Original Message --
From: "Benjamin Turner" 
To: "David F. Robinson" 
Cc: "Gluster Devel" ; 
"gluster-users@gluster.org" 

Sent: 2/3/2015 7:12:34 PM
Subject: Re: [Gluster-devel] missing files

It sounds to me like the files were only copied to one replica, weren't 
there for the initial ls which triggered a self heal, 
and were there for the last ls because they were healed.  Is there any 
chance that one of the replicas was down during the rsync?  It could be 
that you lost a brick during the copy or something like that.  To confirm, I 
would look for disconnects in the brick logs as well as check 
glustershd.log to verify that the missing files were actually healed.


-b

On Tue, Feb 3, 2015 at 5:37 PM, David F. Robinson 
 wrote:
I rsync'd 20-TB over to my gluster system and noticed that I had some 
directories missing even though the rsync completed normally.

The rsync logs showed that the missing files were transferred.

I went to the bricks and did an 'ls -al /data/brick*/homegfs/dir/*' 
the files were on the bricks.  After I did this 'ls', the files then 
showed up on the FUSE mounts.


1) Why are the files hidden on the fuse mount?
2) Why does the ls make them show up on the FUSE mount?
3) How can I prevent this from happening again?

Note, I also mounted the gluster volume using NFS and saw the same 
behavior.  The files/directories were not shown until I did the "ls" 
on the bricks.

[Gluster-users] missing files

2015-02-03 Thread David F. Robinson
I rsync'd 20-TB over to my gluster system and noticed that I had some 
directories missing even though the rsync completed normally.

The rsync logs showed that the missing files were transferred.

I went to the bricks and did an 'ls -al /data/brick*/homegfs/dir/*' the 
files were on the bricks.  After I did this 'ls', the files then showed 
up on the FUSE mounts.


1) Why are the files hidden on the fuse mount?
2) Why does the ls make them show up on the FUSE mount?
3) How can I prevent this from happening again?

Note, I also mounted the gluster volume using NFS and saw the same 
behavior.  The files/directories were not shown until I did the "ls" on 
the bricks.


David



===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Ted Miller


On 2/3/2015 2:44 PM, Joe Julian wrote:


On 02/03/2015 11:34 AM, Ted Miller wrote:


On 2/3/2015 12:23 PM, Joe Julian wrote:



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a "rejected" 
file?  For instance, in my example above, what if I go directly on the 
brick and rename the host-2 copy of the file to hair-pulling.txt-dud?  
The ideal scenario would seem to be that if user does a heal it would 
treat the copy as new file, see no dupe for hair-pulling.txt, and create 
a new dupe on host-2.  Since hair-pulling.txt-dud is also a new file, a 
dupe would be created on host-1.  User could then access files, verify 
correctness, and then delete hair-pulling.txt-dud.
This should cause you to have two files with the same gfid. This will 
create the hardlink in .glusterfs again, and the heal will then re-create 
the .txt file also with that same gfid. Since both files will have the 
same gfid (stored in extended attributes) and be hard linked to the same 
file under .glusterfs you should then end up with both files being 
split-brain. 
Joe, I moved your comments up to be closest to the proposal they seem 
relevant to.


*
A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all
If you note, in the above scenario _I copied from the brick to the mounted 
gluster volume_.  I believe that this forces the breaking of any linkage 
between the old file and the new one.  Am I missing something there?

Yep, I missed that. I seem to be suffering from split-brain, myself, today.

Don't sweat it, your blog posts have dug me out of more than one hole.

Ted Miller
Elkhart, IN, USA


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Reply: Re: replica position

2015-02-03 Thread Ted Miller


On 1/29/2015 8:13 PM, yang.bi...@zte.com.cn wrote:


example: brick A, brick B, brick C compose one volume

When the client (fuse) on brick A writes data into the volume, for some reason we 
want the data written to brick B or brick C.
Are you considering a replicated volume?  If so, the _client_ will write to 
all three bricks at the same time.


If you are looking at a distributed volume, then each file ends up on only 
one node, but file writing is distributed across different nodes.


Ted Miller
Elkhart, IN, USA
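
(For reference, the two layouts being contrasted look like this at creation time;
a sketch with hypothetical host and brick names.)

# Replicated: every file is written to all three bricks
gluster volume create testvol replica 3 hostA:/bricks/b1 hostB:/bricks/b1 hostC:/bricks/b1

# Distributed: each file lands on exactly one brick, chosen by filename hash
gluster volume create testvol hostA:/bricks/b1 hostB:/bricks/b1 hostC:/bricks/b1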




From: Pranith Kumar Karampuri 
To: yang.bi...@zte.com.cn, gluster-users@gluster.org,
Date: 2015/01/28 16:40
Subject: Re: [Gluster-users] replica position




On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn
wrote:


Hi,

Can the replica position be configured ?
Could you give an example?

Pranith

Regards.

Bin.Yang










--
*Ted Miller*, Design Engineer
*SonSet Solutions*
(formerly HCJB Global Technology Center)
my desk +1 574.970.4272
receptionist +1 574.972.4252
http://sonsetsolutions.org

/Technology for abundant life!/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Joe Julian


On 02/03/2015 11:34 AM, Ted Miller wrote:


On 2/3/2015 12:23 PM, Joe Julian wrote:



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a 
"rejected" file?  For instance, in my example above, what if I go 
directly on the brick and rename the host-2 copy of the file to 
hair-pulling.txt-dud?  The ideal scenario would seem to be that if 
user does a heal it would treat the copy as new file, see no dupe 
for hair-pulling.txt, and create a new dupe on host-2.  Since 
hair-pulling.txt-dud is also a new file, a dupe would be created on 
host-1.  User could then access files, verify correctness, and then 
delete hair-pulling.txt-dud.
This should cause you to have two files with the same gfid. This will 
create the hardlink in .glusterfs again, and the heal will then 
re-create the .txt file also with that same gfid. Since both files 
will have the same gfid (stored in extended attributes) and be hard 
linked to the same file under .glusterfs you should then end up with 
both files being split-brain. 
Joe, I moved your comments up to be closest to the proposal they seem 
relevant to.


*
A not-officially-sanctioned way that I dealt with a split-brain a 
few versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" 
of file

6. gluster volume heal data1 all
If you note, in the above scenario _I copied from the brick to the 
mounted gluster volume_.  I believe that this forces the breaking of 
any linkage between the old file and the new one.  Am I missing 
something there?

Yep, I missed that. I seem to be suffering from split-brain, myself, today.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Ted Miller


On 2/3/2015 12:23 PM, Joe Julian wrote:



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a "rejected" 
file?  For instance, in my example above, what if I go directly on the 
brick and rename the host-2 copy of the file to hair-pulling.txt-dud?  The 
ideal scenario would seem to be that if user does a heal it would treat 
the copy as new file, see no dupe for hair-pulling.txt, and create a new 
dupe on host-2.  Since hair-pulling.txt-dud is also a new file, a dupe 
would be created on host-1.  User could then access files, verify 
correctness, and then delete hair-pulling.txt-dud.
This should cause you to have two files with the same gfid. This will 
create the hardlink in .glusterfs again, and the heal will then re-create 
the .txt file also with that same gfid. Since both files will have the same 
gfid (stored in extended attributes) and be hard linked to the same file 
under .glusterfs you should then end up with both files being split-brain. 

Joe, I moved your comments up to be closest to the proposal they seem relevant 
to.


*
A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all
If you note, in the above scenario I copied from the brick to the mounted 
gluster volume.  I believe that this forces the breaking of any linkage 
between the old file and the new one.  Am I missing something there?


Ted Miller
Elkhart, IN

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.6.2 ls issues

2015-02-03 Thread David F. Robinson

Cancel this issue.  I found the problem.  Explanation below...

We use Active Directory to manage our user accounts; however, sssd 
doesn't seem to play well with gluster.  When I turn it on, the cpu load 
shoots up to between 80-100% and stays there (previously submitted bug 
report).  So, what I did on my gluster machines to keep the uid/gid info 
updated (required due to server.manage-gids=on) was to write a script that 
starts sssd, grabs all of the groups/users from the server, writes them 
out to the /etc/group and /etc/passwd files, and then shuts down sssd.  I 
didn't realize that sssd uses the locally cached file.  My script was 
running faster than sssd was updating the cache file, so this particular 
user wasn't in the SBIR group on all of the machines.  He was in that 
group on gfs01a, but not on gfs01b (replica pair) or gfs02a/02b.  I 
guess this gave him enough permission to cd into the directory, but for 
some strange reason he couldn't do an "ls" and have the directory name 
show up.


The only reason I do any of this is because I had to use 
server.manage-gids to overcome the 32-group limitation.  This requires 
that my storage system have all of the user accounts and groups.  The 
preferred option would be to simply use sssd on my storage systems, but 
it doesn't seem to play well with gluster.


David
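
(The option referred to above is set per volume; a one-line sketch using the
volume name that appears elsewhere in this thread.)

gluster volume set homegfs server.manage-gids on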


-- Original Message --
From: "David F. Robinson" 
To: "Gluster Devel" ; 
"gluster-users@gluster.org" 

Sent: 2/3/2015 12:56:40 PM
Subject: gluster 3.6.2 ls issues

On my gluster filesystem mount, I have a user who does an "ls" and not all 
of the directories show up.  Note that the A15-029 directory 
doesn't show up.  However, as kbutz I can cd into the directory.


As root (also tested as several other users), I get the following from 
an "ls -al"

[root@sb1 2015.1]# ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 .
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ..
drwxrws--x  5 cczech    sbir   606 Jan 30 12:58 A15-007
drwxrws--x  5 kbutz sbir   291 Feb  3 12:11 A15-029
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058
drwxrws--x  3 tanderson sbir   216 Jan 28 09:43 AF151-072
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102
drwxrws--x  4 aaronward sbir   493 Feb  3 09:58 AF151-114
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174
drwxrws--x  3 dstowe    sbir   192 Jan 27 12:25 AF15-AT28
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA
As user kbutz, I get the following:
sb1:sbir/2015.1> ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 ./
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ../
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063/
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088/
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058/
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102/
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174/
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA/
Note, that I can still cd into the non-listed directory as kbutz:

[kbutz@sb1 ~]$ cd /homegfs/documentation/programs/sbir/2015.1
A15-063/  A15-088/  AF151-058/  AF151-102/  AF151-174/  NASA/

sb1:sbir/2015.1> cd A15-029
A15-029_proposal_draft_rev1.docx*  CB_work/  gun_work/  Refs/

David

===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster 3.6.2 ls issues

2015-02-03 Thread David F. Robinson
On my gluster filesystem mount, I have a user who does an "ls" and not all 
of the directories show up.  Note that the A15-029 directory 
doesn't show up.  However, as kbutz I can cd into the directory.


As root (also tested as several other users), I get the following from 
an "ls -al"

[root@sb1 2015.1]# ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 .
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ..
drwxrws--x  5 cczech    sbir   606 Jan 30 12:58 A15-007
drwxrws--x  5 kbutz sbir   291 Feb  3 12:11 A15-029
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058
drwxrws--x  3 tanderson sbir   216 Jan 28 09:43 AF151-072
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102
drwxrws--x  4 aaronward sbir   493 Feb  3 09:58 AF151-114
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174
drwxrws--x  3 dstowe    sbir   192 Jan 27 12:25 AF15-AT28
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA
As user kbutz, I get the following:
sb1:sbir/2015.1> ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 ./
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ../
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063/
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088/
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058/
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102/
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174/
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA/
Note, that I can still cd into the non-listed directory as kbutz:

[kbutz@sb1 ~]$ cd /homegfs/documentation/programs/sbir/2015.1
A15-063/  A15-088/  AF151-058/  AF151-102/  AF151-174/  NASA/

sb1:sbir/2015.1> cd A15-029
A15-029_proposal_draft_rev1.docx*  CB_work/  gun_work/  Refs/

David

===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Joe Julian





That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a 
"rejected" file?  For instance, in my example above, what if I go 
directly on the brick and rename the host-2 copy of the file to 
hair-pulling.txt-dud?  The ideal scenario would seem to be that if the 
user does a heal it would treat the copy as a new file, see no dupe for 
hair-pulling.txt, and create a new dupe on host-2.  Since 
hair-pulling.txt-dud is also a new file, a dupe would be created on 
host-1.  User could then access files, verify correctness, and then 
delete hair-pulling.txt-dud.


*
A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of 
file

6. gluster volume heal data1 all


This should cause you to have two files with the same gfid. This will 
create the hard link in .glusterfs again, and the heal will then 
re-create the .txt file, also with that same gfid. Since both files will 
have the same gfid (stored in extended attributes) and be hard linked to 
the same file under .glusterfs, you should then end up with both files 
in split-brain.
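
For readers who want to see that relationship on disk, a small illustration (the brick path and file name come from Ted's example; the commands are not from Joe's mail):

# The gfid is stored as an xattr on the brick copy of the file:
getfattr -n trusted.gfid -e hex /brick/data1/hair-pulling.txt

# For a regular file with gfid aabbccdd-... the hard link lives at
# /brick/data1/.glusterfs/aa/bb/aabbccdd-...; a link count of 2 on the file
# shows that the .glusterfs entry is the same inode:
stat -c '%h %i %n' /brick/data1/hair-pulling.txt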

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Ted Miller


On 1/31/2015 12:47 AM, Pranith Kumar Karampuri wrote:


On 01/30/2015 06:28 PM, Jeff Darcy wrote:

Pranith and I had a discussion regarding this issue and here is what we have
in our mind right now.

We plan to provide the user commands to execute from mount so that he can
access the files in split-brain. This way he can choose which copy is to be
used as source. The user will have to perform a set of getfattrs and
setfattrs (on virtual xattrs) to decide which child to choose as source and
inform AFR with his decision.

A) To know the split-brain status :
getfattr -n trusted.afr.split-brain-status <path-to-file>

This will provide user with the following details -
1) Whether the file is in metadata split-brain
2) Whether the file is in data split-brain

It will also list the names of the afr-children to choose from. Something like:
Option0: client-0
Option1: client-1

We also tell the user what the user could do to view metadata/data info, 
like stat to get metadata etc.

B) Now the user has to choose one of the options (client-x/client-y..) to
inspect the files.
e.g., setfattr -n trusted.afr.split-brain-choice -v client-0 <path-to-file>
We save the read-child info in inode-ctx in order to provide the user access
to the file in split-brain from that child. Once the user inspects the file,
he proceeds to do the same from the other child of replica pair and makes an
informed decision.

C) Once the above steps are done, AFR is to be informed of the final 
choice for source. This is achieved by -
(say the fresh copy is in client-0)
e.g., setfattr -n trusted.afr.split-brain-heal-finalize -v client-0 <path-to-file>

This child will be chosen as source and split-brain resolution will be done.
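
Strung together, the proposed flow would look roughly like this from a client mount (a sketch of the proposal above, not a released interface; the mount path and file name are placeholders):

# A) ask AFR which replicas disagree and whether it is data or metadata split-brain
getfattr -n trusted.afr.split-brain-status /mnt/gvol/hair-pulling.txt

# B) pin reads to one replica at a time and inspect the file from each
setfattr -n trusted.afr.split-brain-choice -v client-0 /mnt/gvol/hair-pulling.txt
stat /mnt/gvol/hair-pulling.txt        # or md5sum, head, etc.
setfattr -n trusted.afr.split-brain-choice -v client-1 /mnt/gvol/hair-pulling.txt
stat /mnt/gvol/hair-pulling.txt

# C) declare the winner; AFR then heals the other replica from it
setfattr -n trusted.afr.split-brain-heal-finalize -v client-0 /mnt/gvol/hair-pulling.txt
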
May I suggest another possible way to get around the difficulty in 
determining which of the files is the one to keep?


What if each of the files were to be renamed by appending the name of the 
brick-host that it lives on?

For example, in a replica 2 system:
brick-1: data1
host-1: host1
brick-2: data1
host-2: host2
file name: hair-pulling.txt

after running script/command to resolve split-brain, file system would have 
two files:

hair-pulling.txt__host-1_data1
hair-pulling.txt__host-2_data1

the user would then delete the unwanted file and rename the wanted file back 
to hair-pulling.txt.
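
Under that scheme the user-side cleanup would simply be (file names taken from the example above):

# keep the host-1 copy, discard the host-2 copy
rm hair-pulling.txt__host-2_data1
mv hair-pulling.txt__host-1_data1 hair-pulling.txt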


The only problem would come with a very large file with a large number of 
replicas (like the replica 5 system I am working with). You might run out of 
space for all the copies.


Otherwise, this seems (to me) to present a user-friendly way to do this.  If 
the user has doubts (and disk space), user can choose to keep the rejected 
file around for a while, "just in case" it happens to have something useful 
in it that is missing from the "accepted" file.



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a "rejected" 
file?  For instance, in my example above, what if I go directly on the brick 
and rename the host-2 copy of the file to hair-pulling.txt-dud?  The ideal 
scenario would seem to be that if the user does a heal it would treat the copy as 
a new file, see no dupe for hair-pulling.txt, and create a new dupe on host-2.  
Since hair-pulling.txt-dud is also a new file, a dupe would be created on 
host-1.  User could then access files, verify correctness, and then delete 
hair-pulling.txt-dud.


*
A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all
I believe that this did work for me at that time.  I have not had to do it on 
a recent gluster version.
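
Written out as commands, the procedure above looks roughly like this (a sketch only; the paths follow the earlier example, the .glusterfs path in step 5 has to be built from the gfid printed by getfattr, and Joe's caveat elsewhere in the thread about ending up with matching gfids still applies):

# On host-2, whose copy we want to keep:
cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud    # step 3

# Note the gfid before deleting; it identifies the "invisible fork" for step 5:
getfattr -n trusted.gfid -e hex /brick/data1/hair-pulling.txt

rm /brick/data1/hair-pulling.txt                                        # step 4
# step 5: remove the matching hard link under the brick's .glusterfs tree,
# e.g. for gfid aabbccdd-... the path is .glusterfs/aa/bb/aabbccdd-...
rm /brick/data1/.glusterfs/aa/bb/aabbccdd-xxxx-xxxx-xxxx-xxxxxxxxxxxx   # placeholder path

gluster volume heal data1 full                                          # step 6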


Ted Miller
Elkhart, IN

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Diagnosing Intermittent Performance Problems Possibly Caused by Gremlins

2015-02-03 Thread Matt


I’ve been trying for weeks to reproduce the performance problems in 
our preproduction environments but can’t. As a result, selling that 
just upgrading to 3.6.x and hoping it goes away might be tricky. 3.6 is 
perceived as a little too bleeding edge, and we’ve actually had some 
other not fully explained issues with this cluster recently that make 
us hesitate. I don’t think they’re related.
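
For the "how to tell what clients are doing what" question quoted below, one avenue worth noting (a sketch, not something settled in the thread; the volume name gv0 is a placeholder) is gluster's built-in instrumentation:

# per-brick fop counts and latencies while the problem is happening
gluster volume profile gv0 start
gluster volume profile gv0 info
gluster volume profile gv0 stop

# busiest files / operations per brick
gluster volume top gv0 read list-cnt 20
gluster volume top gv0 open list-cnt 20

# which clients are connected to each brick (with per-connection byte counters),
# to cross-reference against the brick that is pegged
gluster volume status gv0 clients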


On Tue, Feb 3, 2015 at 4:58 AM, Justin Clift  wrote:

- Original Message -

> Hello List,
> 
> So I've been frustrated by intermittent performance problems throughout
> January. The problem occurs on a two node setup running 3.4.5, 16 gigs
> of ram with a bunch of local disk. For sometimes an hour for sometimes
> weeks at a time (I have extensive graphs in OpenNMS) our Gluster boxes
> will get their CPUs pegged, and in vmstat they'll show extremely high
> numbers of context switches and interrupts. Eventually things calm
> down. During this time, memory usage actually drops. Overall usage on
> the box goes from between 6-10 gigs to right around 4 gigs, and stays
> there. That's what really puzzles me.
> 
> When performance is problematic, sar shows one device, the device
> corresponding to the problem glusterfsd, using all the CPU doing lots of
> little reads, sometimes 70k/second, very small avg rq size, say 10-12.
> Afraid I don't have any saved output handy, but I can try to capture
> some next time it happens. I have tons of information frankly, but am
> trying to keep this reasonably brief.
> 
> There are more than a dozen volumes on this two node setup. The CPU
> usage is pretty much entirely contained to one volume, a 1.5 TB volume
> that is just shy of 70% full. It stores uploaded files for a web app.
> What I hate about this app and so am always suspicious of, is that it
> stores a directory for every user in one level, so under the /data
> directory in the volume, there are 450,000 sub directories at this
> point.
> 
> The only real mitigation step that's been taken so far was to turn off
> the self-heal daemon on the volume, as I thought maybe crawling that
> large directory was getting expensive. This doesn't seem to have done
> anything as the problem still occurs.
> 
> At this point I figure there are one of two sorts of things
> happening really broadly: one, we're running into some sort of bug or
> performance problem with gluster we should either fix perhaps by
> upgrading or tuning around, or two, some process we're running but not
> aware of is hammering the file system causing problems.
> 
> If it's the latter option, can anyone give me any tips on figuring out
> what might be hammering the system? I can use volume top to see what a
> brick is doing, but I can't figure out how to tell what clients are
> doing what.
> 
> Apologies for the somewhat broad nature of the question; any input or
> thoughts would be much appreciated. I can certainly provide more info
> about some things if it would help, but I've tried not to write a novel
> here.

Out of curiosity, are you able to test using GlusterFS 3.6.2?  We've
had a bunch of pretty in-depth upstream testing at decent scale (100+
nodes) from 3.5.x onwards, with lots of performance issues identified
and fixed on the way through.

So, I'm kinda hopeful the problem you're describing is fixed in newer
releases. :D

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster 3.6.2 "no data available" error - selinux?

2015-02-03 Thread Raghavendra Bhat

On Tuesday 03 February 2015 06:36 PM, Kingsley wrote:

On Tue, 2015-02-03 at 12:14 +0530, Raghavendra Bhat wrote:

Hi Kingsley,

I will be making a beta release of 3.6.3 by the end of this week.

Regards,
Raghavendra Bhat

Hi,

That's great news.

Is there a page somewhere that lists the changes/fixes included in
3.6.3? I've found a list of bugs on bugzilla but that doesn't
specifically say what's to change in 3.6.3. No panic if there isn't
already a list handy.



There is a tracker bug for 3.6.3 where the bugs that need to be 
fixed (or that have been fixed) for 3.6.3 are being added.

https://bugzilla.redhat.com/show_bug.cgi?id=1184460

Regards,
Raghavendra Bhat
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.6.2 "no data available" error - selinux?

2015-02-03 Thread Kingsley
On Tue, 2015-02-03 at 12:14 +0530, Raghavendra Bhat wrote:
> Hi Kingsley,
> 
> I will be making a beta release of 3.6.3 by the end of this week.
> 
> Regards,
> Raghavendra Bhat

Hi,

That's great news.

Is there a page somewhere that lists the changes/fixes included in
3.6.3? I've found a list of bugs on bugzilla but that doesn't
specifically say what's to change in 3.6.3. No panic if there isn't
already a list handy.

-- 
Cheers,
Kingsley.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMINDER: Gluster Bug Triage meeting today (12:00 UTC)

2015-02-03 Thread Lalatendu Mohanty

Hi all,

please join the #gluster-meeting IRC channel on irc.freenode.net to
participate on the following topics:

* Roll call
* Status of last weeks action items
* Group Triage
* Open Floor

More details on the above, and last minute changes to the agenda are
kept in the etherpad for this meeting:
- https://public.pad.fsfe.org/p/gluster-bug-triage

The meeting starts at 12:00 UTC, you can convert that to your own
time-zone with the 'date' command:

$ date -d "12:00 UTC

Thanks,
Lala

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Diagnosing Intermittent Performance Problems Possibly Caused by Gremlins

2015-02-03 Thread Justin Clift
- Original Message -
> Hello List,
> 
> So I've been frustrated by intermittent performance problems throughout
> January. The problem occurs on a two node setup running 3.4.5, 16 gigs
> of ram with a bunch of local disk. For sometimes an hour for sometimes
> weeks at a time (I have extensive graphs in OpenNMS) our Gluster boxes
> will get their CPUs pegged, and in vmstat they'll show extremely high
> numbers of context switches and interrupts. Eventually things calm
> down. During this time, memory usage actually drops. Overall usage on
> the box goes from between 6-10 gigs to right around 4 gigs, and stays
> there. That's what really puzzles me.
> 
> When performance is problematic, sar shows one device, the device
> corresponding to the problem glusterfsd, using all the CPU doing lots of
> little reads, sometimes 70k/second, very small avg rq size, say 10-12.
> Afraid I don't have any saved output handy, but I can try to capture
> some next time it happens. I have tons of information frankly, but am
> trying to keep this reasonably brief.
> 
> There are more than a dozen volumes on this two node setup. The CPU
> usage is pretty much entirely contained to one volume, a 1.5 TB volume
> that is just shy of 70% full. It stores uploaded files for a web app.
> What I hate about this app and so am always suspicious of, is that it
> stores a directory for every user in one level, so under the /data
> directory in the volume, there are 450,000 sub directories at this
> point.
> 
> The only real mitigation step that's been taken so far was to turn off
> the self-heal daemon on the volume, as I thought maybe crawling that
> large directory was getting expensive. This doesn't seem to have done
> anything as the problem still occurs.
> 
> At this point I figure there are one of two sorts of things
> happening really broadly: one, we're running into some sort of bug or
> performance problem with gluster we should either fix perhaps by
> upgrading or tuning around, or two, some process we're running but not
> aware of is hammering the file system causing problems.
> 
> If it's the latter option, can anyone give me any tips on figuring out
> what might be hammering the system? I can use volume top to see what a
> brick is doing, but I can't figure out how to tell what clients are
> doing what.
> 
> Apologies for the somewhat broad nature of the question; any input or
> thoughts would be much appreciated. I can certainly provide more info
> about some things if it would help, but I've tried not to write a novel
> here.

Out of curiosity, are you able to test using GlusterFS 3.6.2?  We've
had a bunch of pretty in-depth upstream testing at decent scale (100+
nodes) from 3.5.x onwards, with lots of performance issues identified
and fixed on the way through.

So, I'm kinda hopeful the problem you're describing is fixed in newer
releases. :D

Regards and best wishes,

Justin Clift

-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster quota showing wrong space utilization (showing 16384.0 PB)

2015-02-03 Thread Omkar Kulkarni
Hi Vijay,

I was able to recreate the issue.
1) Create a replicated volume with 2 CentOS 6.5 machines

i) gluster volume create homedir replica 2 transport tcp
node1:/brick1/homedir node2:/brick1/homedir


gluster  volume set homedir network.ping-timeout 10

gluster  volume set homedir performance.cache-size 1GB

gluster  volume set homedir nfs.rpc-auth-allow 10.99.30.20,10.99.30.21

gluster  volume set homedir auth.allow 10.99.30.20,10.99.30.21

gluster  volume set homedir features.quota on

gluster  volume set homedir features.quota-timeout 0

gluster  volume set homedir features.quota-deem-statfs on


ii) setup quotas on around 900 folders


node1:~# gluster volume quota homedir list | wc -l

903


iii) Created an rsync job to create/delete files with different names in 20
different folders and ran it every 2 minutes

iv) Within around 2 days, some directories showed PB values in the space-used column
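
For reference, steps ii) and iii) can be scripted along these lines (a sketch of what was described rather than the exact scripts used; mount and directory names are placeholders, and a dd/find loop stands in for the rsync job):

# ii) put a limit on ~900 directories (assumes they already exist on the volume)
for i in $(seq 1 900); do
    gluster volume quota homedir limit-usage /storage/home$i 1GB
done

# iii) churn files with changing names in 20 of the directories, run every
# 2 minutes from cron, e.g.:  */2 * * * * /usr/local/bin/quota-churn.sh
for i in $(seq 1 20); do
    dd if=/dev/urandom of=/mnt/homedir/storage/home$i/file.$RANDOM bs=1M count=10
    find /mnt/homedir/storage/home$i -name 'file.*' -mmin +10 -delete
done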



On Fri, Jan 23, 2015 at 1:01 AM, Vijaikumar M  wrote:

>  Hi Omar,
>
> If the issue is happening consistently, can we get a re-creatable
> test-case for this same?
>
> Thanks,
> Vijay
>
>
> On Wednesday 21 January 2015 02:47 AM, Omkar Kulkarni wrote:
>
> Hi Guys,
>
>  I am using gluster 3.5.2 and for couple of directories I am getting used
> space as 16384.0 PB
>
>   node1:~# gluster volume quota homedir list |grep PB
>
> /storage/home1  3.0GB   90%   16384.0PB   3.0GB
>
>  /storage/home2  1.0GB   90%   16384.0PB   1.0GB
>
>
>
>  DU on actual directory
>
>
>  node1:~#  du -hs /data/storage/home1/
>
>  368K    /data/storage/home1/
>
>
>  node1:~#  du -hs /data/storage/home2
>
>  546K    /data/storage/home2
>
>
>  I am using replicated volume (replica count 2) and mounted volume as
> nfs. Individual bricks are formatted as ext4. If I remove quotas and
> restart replicated volume and then re-add quotas then gluster quota shows
> correct value (instead of 16384.0 PB) but once file transfer starts and
> files are added/deleted again these directories start showing incorrect
> values.
>
>
>  Just wanted to check if this is a bug in gluster quota and/or if there
> is any way to mitigate this?
>
>
>
>  I do see a similar bug but for 3.4
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1061068
>
>
>  Thanks
>
>
>
>
>
>
>
>
>
>
>
> ___
> Gluster-users mailing 
> listGluster-users@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users