Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Pranith Kumar Karampuri


On 02/03/2015 10:42 PM, Ted Miller wrote:


On 1/31/2015 12:47 AM, Pranith Kumar Karampuri wrote:


On 01/30/2015 06:28 PM, Jeff Darcy wrote:
Pranith and I had a discussion regarding this issue and here is what we have
in our mind right now.

We plan to provide the user commands to execute from mount so that he can
access the files in split-brain. This way he can choose which copy is to be
used as source. The user will have to perform a set of getfattrs and
setfattrs (on virtual xattrs) to decide which child to choose as source and
inform AFR with his decision.

A) To know the split-brain status :
getfattr -n trusted.afr.split-brain-status 

This will provide user with the following details -
1) Whether the file is in metadata split-brain
2) Whether the file is in data split-brain

It will also list the name of afr-children to choose from. Something like :
Option0: client-0
Option1: client-1

We also tell the user what the user could do to view metadata/data info; like
stat to get metadata etc.

B) Now the user has to choose one of the options (client-x/client-y..) to
inspect the files.
e.g., setfattr -n trusted.afr.split-brain-choice -v client-0 
We save the read-child info in inode-ctx in order to provide the user access
to the file in split-brain from that child. Once the user inspects the file,
he proceeds to do the same from the other child of replica pair and makes an
informed decision.

C) Once the above steps are done, AFR is to be informed with the final choice
for source. This is achieved by -
(say the fresh copy is in client-0)
e.g., setfattr -n trusted.afr.split-brain-heal-finalize -v client-0

This child will be chosen as source and split-brain resolution will be done.
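
Put together, the proposed steps from a client mount would look roughly like
the sketch below. The mount path and file name (/mnt/gv0/file.txt) are only
placeholders, and the xattr names are the ones proposed above, so they may
change before this ships:

# A) query the split-brain status and the afr children to choose from
getfattr -n trusted.afr.split-brain-status /mnt/gv0/file.txt

# B) pick a child to inspect; subsequent reads are served from that copy
setfattr -n trusted.afr.split-brain-choice -v client-0 /mnt/gv0/file.txt
stat /mnt/gv0/file.txt        # inspect metadata of this copy
md5sum /mnt/gv0/file.txt      # inspect data of this copy
# ...repeat with -v client-1 and compare...

# C) tell AFR which copy to use as the heal source
setfattr -n trusted.afr.split-brain-heal-finalize -v client-0 /mnt/gv0/file.txt
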
May I suggest another possible way to get around the difficulty in 
determining which of the files is the one to keep?


What if each of the files were to be renamed by appending the name of 
the brick-host that it lives on?

For example, in a replica 2 system:
brick-1: data1
host-1: host1
brick-2: data1
host-2: host2
file name: hair-pulling.txt

after running script/command to resolve split-brain, file system would 
have two files:

hair-pulling.txt__host-1_data1
hair-pulling.txt__host-2_data1
This doesn't seem so bad either. I will need to give it more thought to 
see if there are any problems.


the user would then delete the unwanted file and rename the wanted 
file back to hair-pulling.txt.
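
The cleanup after such a resolution would then be a plain delete/rename on
the gluster mount, e.g. if the host-1 copy is the one to keep:

rm hair-pulling.txt__host-2_data1
mv hair-pulling.txt__host-1_data1 hair-pulling.txt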


The only problem would come with a very large file with a large number 
of replicas (like the replica 5 system I am working with). You might 
run out of space for all the copies.


Otherwise, this seems (to me) to present a user-friendly way to do 
this.  If the user has doubts (and disk space), user can choose to 
keep the rejected file around for a while, "just in case" it happens 
to have something useful in it that is missing from the "accepted" file.



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a 
"rejected" file?  For instance, in my example above, what if I go 
directly on the brick and rename the host-2 copy of the file to 
hair-pulling.txt-dud?  The ideal scenario would seem to be that if 
user does a heal it would treat the copy as new file, see no dupe for 
hair-pulling.txt, and create a new dupe on host-2.  Since 
hair-pulling.txt-dud is also a new file, a dupe would be created on 
host-1.  User could then access files, verify correctness, and then 
delete hair-pulling.txt-dud.


*

This one won't work because of the reason Joe gave about gfid-hardlinks.
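
For anyone unfamiliar with the gfid hard links Joe refers to: every file on a
brick carries a trusted.gfid xattr and is hard-linked to a gfid-named entry
under the brick's .glusterfs directory, so renaming the file on the brick
alone does not detach it from that identity. A quick way to see this, using
the brick path from Ted's example and a made-up gfid:

getfattr -n trusted.gfid -e hex /brick/data1/hair-pulling.txt
# suppose it prints trusted.gfid=0x11223344556677889900aabbccddeeff;
# the matching hard link then lives at:
ls -li /brick/data1/.glusterfs/11/22/11223344-5566-7788-9900-aabbccddeeff
stat -c %h /brick/data1/hair-pulling.txt    # link count of 2 (or more)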

A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all
I believe that this did work for me at that time.  I have not had to 
do it on a recent gluster version.
This would work. You can check the document written by Ravi for this in 
the official tree: 
https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md
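
In outline, the brick-side procedure in that document comes down to something
like the following very rough sketch, here for a volume named data1 and the
brick path from Ted's example; the xattr value is illustrative, and the
document should be consulted for exactly which changelog bytes to reset for a
data vs. metadata split-brain:

# 1) on each brick, inspect the afr changelog xattrs and gfid of the file
getfattr -d -m . -e hex /brick/data1/hair-pulling.txt

# 2) on the brick whose copy is to be discarded, zero out its changelog
#    xattr that accuses the brick being kept (here client-0 = the kept brick)
setfattr -n trusted.afr.data1-client-0 -v 0x000000000000000000000000 \
         /brick/data1/hair-pulling.txt

# 3) trigger a heal so the retained copy is synced over the discarded one
gluster volume heal data1
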


Pranith


Ted Miller
Elkhart, IN

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Ted Miller


On 2/3/2015 2:44 PM, Joe Julian wrote:


On 02/03/2015 11:34 AM, Ted Miller wrote:


On 2/3/2015 12:23 PM, Joe Julian wrote:



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a "rejected" 
file?  For instance, in my example above, what if I go directly on the 
brick and rename the host-2 copy of the file to hair-pulling.txt-dud?  
The ideal scenario would seem to be that if user does a heal it would 
treat the copy as new file, see no dupe for hair-pulling.txt, and create 
a new dupe on host-2.  Since hair-pulling.txt-dud is also a new file, a 
dupe would be created on host-1.  User could then access files, verify 
correctness, and then delete hair-pulling.txt-dud.
This should cause you to have two files with the same gfid. This will 
create the hardlink in .glusterfs again, and the heal will then re-create 
the .txt file also with that same gfid. Since both files will have the 
same gfid (stored in extended attributes) and be hard linked to the same 
file under .glusterfs you should then end up with both files being 
split-brain. 
Joe, I moved your comments up to be closest to the proposal they seem
relevant to.


*
A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all
If you note, in the above scenario _I copied from the brick to the mounted 
gluster volume_.  I believe that this forces the breaking of any linkage 
between the old file and the new one.  Am I missing something there?

Yep, I missed that. I seem to be suffering from split-brain, myself, today.

Don't sweat it, your blog posts have dug me out of more than one hole.

Ted Miller
Elkhart, IN, USA


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Joe Julian


On 02/03/2015 11:34 AM, Ted Miller wrote:


On 2/3/2015 12:23 PM, Joe Julian wrote:



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a 
"rejected" file?  For instance, in my example above, what if I go 
directly on the brick and rename the host-2 copy of the file to 
hair-pulling.txt-dud?  The ideal scenario would seem to be that if 
user does a heal it would treat the copy as new file, see no dupe 
for hair-pulling.txt, and create a new dupe on host-2.  Since 
hair-pulling.txt-dud is also a new file, a dupe would be created on 
host-1.  User could then access files, verify correctness, and then 
delete hair-pulling.txt-dud.
This should cause you to have two files with the same gfid. This will 
create the hardlink in .glusterfs again, and the heal will then 
re-create the .txt file also with that same gfid. Since both files 
will have the same gfid (stored in extended attributes) and be hard 
linked to the same file under .glusterfs you should then end up with 
both files being split-brain. 
Joe, I moved your comments up to be closest to the proposal they seem
relevant to.


*
A not-officially-sanctioned way that I dealt with a split-brain a 
few versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all
If you note, in the above scenario _I copied from the brick to the 
mounted gluster volume_.  I believe that this forces the breaking of 
any linkage between the old file and the new one.  Am I missing 
something there?

Yep, I missed that. I seem to be suffering from split-brain, myself, today.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Ted Miller


On 2/3/2015 12:23 PM, Joe Julian wrote:



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a "rejected" 
file?  For instance, in my example above, what if I go directly on the 
brick and rename the host-2 copy of the file to hair-pulling.txt-dud?  The 
ideal scenario would seem to be that if user does a heal it would treat 
the copy as new file, see no dupe for hair-pulling.txt, and create a new 
dupe on host-2.  Since hair-pulling.txt-dud is also a new file, a dupe 
would be created on host-1.  User could then access files, verify 
correctness, and then delete hair-pulling.txt-dud.
This should cause you to have two files with the same gfid. This will 
create the hardlink in .glusterfs again, and the heal will then re-create 
the .txt file also with that same gfid. Since both files will have the same 
gfid (stored in extended attributes) and be hard linked to the same file 
under .glusterfs you should then end up with both files being split-brain. 

Joe, I moved your comments up to be closest to the proposal they seem
relevant to.


*
A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all
If you note, in the above scenario I copied from the brick to the mounted 
gluster volume.  I believe that this forces the breaking of any linkage 
between the old file and the new one.  Am I missing something there?


Ted Miller
Elkhart, IN

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Joe Julian





That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a 
"rejected" file?  For instance, in my example above, what if I go 
directly on the brick and rename the host-2 copy of the file to 
hair-pulling.txt-dud?  The ideal scenario would seem to be that if 
user does a heal it would treat the copy as new file, see no dupe for 
hair-pulling.txt, and create a new dupe on host-2.  Since 
hair-pulling.txt-dud is also a new file, a dupe would be created on 
host-1.  User could then access files, verify correctness, and then 
delete hair-pulling.txt-dud.


*
A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all


This should cause you to have two files with the same gfid. This will 
create the hardlink in .glusterfs again, and the heal will then 
re-create the .txt file also with that same gfid. Since both files will 
have the same gfid (stored in extended attributes) and be hard linked to 
the same file under .glusterfs you should then end up with both files 
being split-brain.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-02-03 Thread Ted Miller


On 1/31/2015 12:47 AM, Pranith Kumar Karampuri wrote:


On 01/30/2015 06:28 PM, Jeff Darcy wrote:

Pranith and I had a discussion regarding this issue and here is what we have
in our mind right now.

We plan to provide the user commands to execute from mount so that he can
access the files in split-brain. This way he can choose which copy is to be
used as source. The user will have to perform a set of getfattrs and
setfattrs (on virtual xattrs) to decide which child to choose as source and
inform AFR with his decision.

A) To know the split-brain status :
getfattr -n trusted.afr.split-brain-status 

This will provide user with the following details -
1) Whether the file is in metadata split-brain
2) Whether the file is in data split-brain

It will also list the name of afr-children to choose from. Something like :
Option0: client-0
Option1: client-1

We also tell the user what the user could do to view metadata/data info; like
stat to get metadata etc.

B) Now the user has to choose one of the options (client-x/client-y..) to
inspect the files.
e.g., setfattr -n trusted.afr.split-brain-choice -v client-0 
We save the read-child info in inode-ctx in order to provide the user access
to the file in split-brain from that child. Once the user inspects the file,
he proceeds to do the same from the other child of replica pair and makes an
informed decision.

C) Once the above steps are done, AFR is to be informed with the final choice
for source. This is achieved by -
(say the fresh copy is in client-0)
e.g., setfattr -n trusted.afr.split-brain-heal-finalize -v client-0

This child will be chosen as source and split-brain resolution will be done.
May I suggest another possible way to get around the difficulty in 
determining which of the files is the one to keep?


What if each of the files were to be renamed by appending the name of the 
brick-host that it lives on?

For example, in a replica 2 system:
brick-1: data1
host-1: host1
brick-2: data1
host-2: host2
file name: hair-pulling.txt

after running script/command to resolve split-brain, file system would have 
two files:

hair-pulling.txt__host-1_data1
hair-pulling.txt__host-2_data1

the user would then delete the unwanted file and rename the wanted file back 
to hair-pulling.txt.


The only problem would come with a very large file with a large number of 
replicas (like the replica 5 system I am working with). You might run out of 
space for all the copies.


Otherwise, this seems (to me) to present a user-friendly way to do this.  If 
the user has doubts (and disk space), user can choose to keep the rejected 
file around for a while, "just in case" it happens to have something useful 
in it that is missing from the "accepted" file.



That brought another thought to mind (have not had reason to try it):
How does gluster cope if you go behind its back and rename a "rejected" 
file?  For instance, in my example above, what if I go directly on the brick 
and rename the host-2 copy of the file to hair-pulling.txt-dud?  The ideal 
scenario would seem to be that if user does a heal it would treat the copy as 
new file, see no dupe for hair-pulling.txt, and create a new dupe on host-2.  
Since hair-pulling.txt-dud is also a new file, a dupe would be created on 
host-1.  User could then access files, verify correctness, and then delete 
hair-pulling.txt-dud.


*
A not-officially-sanctioned way that I dealt with a split-brain a few 
versions back:

1. decided I wanted to keep file on host-2
2. log onto host-2
3. cp /brick/data1/hair-pulling.txt /gluster/data1/hair-pulling.txt-dud
4. rm /brick/data1/hair-pulling.txt
5. follow some Joe Julian blog stuff to delete the "invisible fork" of file
6. gluster volume heal data1 all
I believe that this did work for me at that time.  I have not had to do it on 
a recent gluster version.


Ted Miller
Elkhart, IN

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-30 Thread Pranith Kumar Karampuri


On 01/30/2015 06:28 PM, Jeff Darcy wrote:

Pranith and I had a discussion regarding this issue and here is what we have
in our mind right now.

We plan to provide the user commands to execute from mount so that he can
access the files in split-brain. This way he can choose which copy is to be
used as source. The user will have to perform a set of getfattrs and
setfattrs (on virtual xattrs) to decide which child to choose as source and
inform AFR with his decision.

A) To know the split-brain status :
getfattr -n trusted.afr.split-brain-status 

This will provide user with the following details -
1) Whether the file is in metadata split-brain
2) Whether the file is in data split-brain

It will also list the name of afr-children to choose from. Something like :
Option0: client-0
Option1: client-1

We also tell the user what the user could do to view metadata/data info; like
stat to get metadata etc.

B) Now the user has to choose one of the options (client-x/client-y..) to
inspect the files.
e.g., setfattr -n trusted.afr.split-brain-choice -v client-0 
We save the read-child info in inode-ctx in order to provide the user access
to the file in split-brain from that child. Once the user inspects the file,
he proceeds to do the same from the other child of replica pair and makes an
informed decision.

C) Once the above steps are done, AFR is to be informed with the final choice
for source. This is achieved by -
(say the fresh copy is in client-0)
e.g., setfattr -n trusted.afr.split-brain-heal-finalize -v client-0

This child will be chosen as source and split-brain resolution will be done.

+1

That looks quite nice, and AFAICT shouldn't be prohibitively hard to
implement.
The only problem I see is kernel read caching, which may get in the way.
We may have to invoke fuse_invalidate if it does. We will find out once we
implement this.
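
Until such invalidation exists, one way to reduce the chance of the kernel
serving stale cached data while inspecting the copies might be a dedicated
inspection mount with caching dialled down. A sketch only; the server,
volume and mount point below are placeholders:

mount -t glusterfs \
      -o direct-io-mode=enable,attribute-timeout=0,entry-timeout=0 \
      server1:/data1 /mnt/inspect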


Pranith





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-30 Thread Joe Julian
Looks good! 

On January 30, 2015 4:58:35 AM PST, Jeff Darcy  wrote:
>> Pranith and I had a discussion regarding this issue and here is what
>we have
>> in our mind right now.
>> 
>> We plan to provide the user commands to execute from mount so that he
>can
>> access the files in split-brain. This way he can choose which copy is
>to be
>> used as source. The user will have to perform a set of getfattrs and
>> setfattrs (on virtual xattrs) to decide which child to choose as
>source and
>> inform AFR with his decision.
>> 
>> A) To know the split-brain status :
>> getfattr -n trusted.afr.split-brain-status 
>> 
>> This will provide user with the following details -
>> 1) Whether the file is in metadata split-brain
>> 2) Whether the file is in data split-brain
>> 
>> It will also list the name of afr-children to choose from. Something
>like :
>> Option0: client-0
>> Option1: client-1
>> 
>> We also tell the user what the user could do to view metadata/data
>info; like
>> stat to get metadata etc.
>> 
>> B) Now the user has to choose one of the options
>(client-x/client-y..) to
>> inspect the files.
>> e.g., setfattr -n trusted.afr.split-brain-choice -v client-0
>
>> We save the read-child info in inode-ctx in order to provide the user
>access
>> to the file in split-brain from that child. Once the user inspects
>the file,
>> he proceeds to do the same from the other child of replica pair and
>makes an
>> informed decision.
>> 
>> C) Once the above steps are done, AFR is to be informed with the
>final choice
>> for source. This is achieved by -
>> (say the fresh copy is in client-0)
>> e.g., setfattr -n trusted.afr.split-brain-heal-finalize -v client-0
>> 
>> This child will be chosen as source and split-brain resolution will
>be done.
>
>+1
>
>That looks quite nice, and AFAICT shouldn't be prohibitively hard to
>implement.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-30 Thread Jeff Darcy
> Pranith and I had a discussion regarding this issue and here is what we have
> in our mind right now.
> 
> We plan to provide the user commands to execute from mount so that he can
> access the files in split-brain. This way he can choose which copy is to be
> used as source. The user will have to perform a set of getfattrs and
> setfattrs (on virtual xattrs) to decide which child to choose as source and
> inform AFR with his decision.
> 
> A) To know the split-brain status :
> getfattr -n trusted.afr.split-brain-status 
> 
> This will provide user with the following details -
> 1) Whether the file is in metadata split-brain
> 2) Whether the file is in data split-brain
> 
> It will also list the name of afr-children to choose from. Something like :
> Option0: client-0
> Option1: client-1
> 
> We also tell the user what the user could do to view metadata/data info; like
> stat to get metadata etc.
> 
> B) Now the user has to choose one of the options (client-x/client-y..) to
> inspect the files.
> e.g., setfattr -n trusted.afr.split-brain-choice -v client-0 
> We save the read-child info in inode-ctx in order to provide the user access
> to the file in split-brain from that child. Once the user inspects the file,
> he proceeds to do the same from the other child of replica pair and makes an
> informed decision.
> 
> C) Once the above steps are done, AFR is to be informed with the final choice
> for source. This is achieved by -
> (say the fresh copy is in client-0)
> e.g., setfattr -n trusted.afr.split-brain-heal-finalize -v client-0
> 
> This child will be chosen as source and split-brain resolution will be done.

+1

That looks quite nice, and AFAICT shouldn't be prohibitively hard to
implement.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-30 Thread Anuradha Talur


- Original Message -
> From: "Jeff Darcy" 
> To: "Pranith Kumar Karampuri" 
> Cc: "Joe Julian" , gluster-users@gluster.org, 
> "Anuradha Talur" 
> Sent: Wednesday, January 28, 2015 9:01:15 PM
> Subject: Re: [Gluster-users] ... i was able to produce a split brain...
> 
> > On 01/27/2015 11:43 PM, Joe Julian wrote:
> > > No, there's not. I've been asking for this for years.
> > Hey Joe,
> > Vijay and I were just talking about this today. We were
> > wondering if you could give us the inputs to make it a feature to
> > implement.
> > Here are the questions I have:
> > Basic requirements if I understand correctly are as follows:
> > 1) User should be able to fix the split-brain without any intervention
> > from admin as the user knows best about the data.
> > 2) He should be able to somehow preview the data before selecting
> > the copy which he/she wants to preserve.
> 
> One possibility would be to implement something like DHT's
> filter_loc_subvol_key, though perhaps using child indices instead of
> translator names.  Another would be a script which can manipulate
> volfiles and use GFAPI to fetch a specific version of a file.  I've
> written several scripts which can do the necessary volfile manipulation.
> If we finally have a commitment to do something like this, actually
> implementing it will be the easy part.
> 

Hello,

Pranith and I had a discussion regarding this issue and here is what we have in 
our mind right now.

We plan to provide the user commands to execute from mount so that he can 
access the files in split-brain. This way he can choose which copy is to be 
used as source. The user will have to perform a set of getfattrs and setfattrs 
(on virtual xattrs) to decide which child to choose as source and inform AFR 
with his decision.

A) To know the split-brain status :
getfattr -n trusted.afr.split-brain-status 

This will provide user with the following details -
1) Whether the file is in metadata split-brain
2) Whether the file is in data split-brain

It will also list the name of afr-children to choose from. Something like :
Option0: client-0
Option1: client-1

We also tell the user what the user could do to view metadata/data info; like 
stat to get metadata etc.

B) Now the user has to choose one of the options (client-x/client-y..) to 
inspect the files.
e.g., setfattr -n trusted.afr.split-brain-choice -v client-0 
We save the read-child info in inode-ctx in order to provide the user access to 
the file in split-brain from that child. Once the user inspects the file, he 
proceeds to do the same from the other child of replica pair and makes an 
informed decision.

C) Once the above steps are done, AFR is to be informed with the final choice 
for source. This is achieved by -
(say the fresh copy is in client-0)
e.g., setfattr -n trusted.afr.split-brain-heal-finalize -v client-0 

This child will be chosen as source and split-brain resolution will be done.

-- 
Thanks,
Anuradha.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-29 Thread Ml Ml
Hm. okay. Then the split-brain seems to have been resolved somehow. thanks.

However, a user-friendly way of resolving split-brains would be great
(like Jeff suggested).

Thanks a lot so far.

Mario


On Thu, Jan 29, 2015 at 5:19 AM, Ravishankar N  wrote:
>
> On 01/28/2015 10:58 PM, Ml Ml wrote:
>>
>> "/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids" is a binary file.
>>
>>
>> Here is the output of gluster volume info:
>>
>> --
>>
>>
>> [root@ovirt-node03 ~]# gluster volume info
>>
>> Volume Name: RaidVolB
>> Type: Replicate
>> Volume ID: e952fd41-45bf-42d9-b494-8e0195cb9756
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt-node03.example.local:/raidvol/volb/brick
>> Brick2: ovirt-node04.example.local:/raidvol/volb/brick
>> Options Reconfigured:
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> performance.stat-prefetch: off
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> auth.allow: *
>> user.cifs: disable
>> nfs.disable: on
>>
>>
>>
>>
>> [root@ovirt-node04 ~]#  gluster volume info
>>
>> Volume Name: RaidVolB
>> Type: Replicate
>> Volume ID: e952fd41-45bf-42d9-b494-8e0195cb9756
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt-node03.example.local:/raidvol/volb/brick
>> Brick2: ovirt-node04.example.local:/raidvol/volb/brick
>> Options Reconfigured:
>> nfs.disable: on
>> user.cifs: disable
>> auth.allow: *
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>>
>>
>> Here is the getfattr command in node03 and node 04:
>>
>> --
>>
>>
>>   getfattr -d -m . -e hex
>> /raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> getfattr: Removing leading '/' from absolute path names
>> # file:
>> raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> trusted.afr.RaidVolB-client-0=0x
>> trusted.afr.RaidVolB-client-1=0x
>> trusted.afr.dirty=0x
>> trusted.gfid=0x1c15d0cb1cca4627841c395f7b712f73
>>
>>
>>
>> [root@ovirt-node04 ~]# getfattr -d -m . -e hex
>> /raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> getfattr: Removing leading '/' from absolute path names
>> # file:
>> raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> trusted.afr.RaidVolB-client-0=0x
>> trusted.afr.RaidVolB-client-1=0x
>> trusted.afr.dirty=0x
>> trusted.gfid=0x1c15d0cb1cca4627841c395f7b712f73
>>
>
> These xattrs seem to indicate there is no split-brain for the file,
> heal-info also shows 0 entries on both bricks.
> Are you getting I/O error when you read
> "1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids" from the mount?
> If yes, is there a difference in file size on both nodes? How about the
> contents (check if md5sum is same)?
>
>> Am i supposed to run those commands on the mounted brick?:
>>
>> --
>> 127.0.0.1:RaidVolB on
>> /rhev/data-center/mnt/glusterSD/127.0.0.1:RaidVolB type fuse.glusterfs
>> (rw,default_permissions,allow_other,max_read=131072)
>>
>>
>> At the very beginning i thought i removed the file with "rm
>> /raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids"
>> hoping gluster would then fix itself somehow :)
>> It was gone but it seems to be here again. Dunno if this is any help.
>>
>>
>> Here is gluster volume heal RaidVolB info on both nodes:
>>
>> --
>>
>> [root@ovirt-node03 ~]#  gluster volume heal RaidVolB info
>> Brick ovirt-node03.example.local:/raidvol/volb/brick/
>> Number of entries: 0
>>
>> Brick ovirt-node04.example.local:/raidvol/volb/brick/
>> Number of entries: 0
>>
>>
>> [root@ovirt-node04 ~]#  gluster volume heal RaidVolB info
>> Brick ovirt-node03.example.local:/raidvol/volb/brick/
>> Number of entries: 0
>>
>> Brick ovirt-node04.example.local:/raidvol/volb/brick/
>> Number of entries: 0
>>
>>
>> Thanks a lot,
>> Mario
>>
>>
>>
>> On Wed, Jan 28, 2015 at 4:57 PM, Ravishankar N 
>> wrote:
>>>
>>> On 01/28/2015 08:34 PM, Ml Ml wrote:

 Hello Ravi,

 thanks a lot for your reply.

 The Data on ovirt-node03 is the one which i want.

 Here are the infos collected by following the howto:


 https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md



 [root@ovirt-node03 ~]# gluster volume

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-28 Thread Ravishankar N


On 01/28/2015 10:58 PM, Ml Ml wrote:

"/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids" is a binary file.


Here is the output of gluster volume info:
--


[root@ovirt-node03 ~]# gluster volume info

Volume Name: RaidVolB
Type: Replicate
Volume ID: e952fd41-45bf-42d9-b494-8e0195cb9756
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt-node03.example.local:/raidvol/volb/brick
Brick2: ovirt-node04.example.local:/raidvol/volb/brick
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: disable
nfs.disable: on




[root@ovirt-node04 ~]#  gluster volume info

Volume Name: RaidVolB
Type: Replicate
Volume ID: e952fd41-45bf-42d9-b494-8e0195cb9756
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt-node03.example.local:/raidvol/volb/brick
Brick2: ovirt-node04.example.local:/raidvol/volb/brick
Options Reconfigured:
nfs.disable: on
user.cifs: disable
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
storage.owner-uid: 36
storage.owner-gid: 36


Here is the getfattr command in node03 and node 04:
--


  getfattr -d -m . -e hex
/raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
getfattr: Removing leading '/' from absolute path names
# file: raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
trusted.afr.RaidVolB-client-0=0x
trusted.afr.RaidVolB-client-1=0x
trusted.afr.dirty=0x
trusted.gfid=0x1c15d0cb1cca4627841c395f7b712f73



[root@ovirt-node04 ~]# getfattr -d -m . -e hex
/raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
getfattr: Removing leading '/' from absolute path names
# file: raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
trusted.afr.RaidVolB-client-0=0x
trusted.afr.RaidVolB-client-1=0x
trusted.afr.dirty=0x
trusted.gfid=0x1c15d0cb1cca4627841c395f7b712f73



These xattrs seem to indicate there is no split-brain for the file, 
heal-info also shows 0 entries on both bricks.
Are you getting I/O error when you read 
"1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids" from the mount?
If yes, is there a difference in file size on both nodes? How about the 
contents (check if md5sum is same)?
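
One quick way to make that comparison is to run the following on each node
against the brick copy (path as in the output above; the stat format flag
assumes GNU coreutils):

stat -c '%s bytes, mtime %y' /raidvol/volb/brick/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
md5sum /raidvol/volb/brick/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids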

Am i supposed to run those commands on the mounted brick?:
--
127.0.0.1:RaidVolB on
/rhev/data-center/mnt/glusterSD/127.0.0.1:RaidVolB type fuse.glusterfs
(rw,default_permissions,allow_other,max_read=131072)


At the very beginning i thought i removed the file with "rm
/raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids"
hoping gluster would then fix itself somehow :)
It was gone but it seems to be here again. Dunno if this is any help.


Here is gluster volume heal RaidVolB info on both nodes:
--

[root@ovirt-node03 ~]#  gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick/
Number of entries: 0


[root@ovirt-node04 ~]#  gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick/
Number of entries: 0


Thanks a lot,
Mario



On Wed, Jan 28, 2015 at 4:57 PM, Ravishankar N  wrote:

On 01/28/2015 08:34 PM, Ml Ml wrote:

Hello Ravi,

thanks a lot for your reply.

The Data on ovirt-node03 is the one which i want.

Here are the infos collected by following the howto:

https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md



[root@ovirt-node03 ~]# gluster volume heal RaidVolB info split-brain
Gathering list of split brain entries on volume RaidVolB has been
successful

Brick ovirt-node03.example.local:/raidvol/volb/brick
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick
Number of entries: 14
atpath on brick
---
2015-01-27 17:33:00 
2015-01-27 17:34:01 
2015-01-27 17:35:04 
2015-01-27 17:36:05 /ids
2015-01-27 17:37:06 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:37:07 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:38:08 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:38:21 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-28 Thread Ml Ml
"/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids" is a binary file.


Here is the output of gluster volume info:
--


[root@ovirt-node03 ~]# gluster volume info

Volume Name: RaidVolB
Type: Replicate
Volume ID: e952fd41-45bf-42d9-b494-8e0195cb9756
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt-node03.example.local:/raidvol/volb/brick
Brick2: ovirt-node04.example.local:/raidvol/volb/brick
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: disable
nfs.disable: on




[root@ovirt-node04 ~]#  gluster volume info

Volume Name: RaidVolB
Type: Replicate
Volume ID: e952fd41-45bf-42d9-b494-8e0195cb9756
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt-node03.example.local:/raidvol/volb/brick
Brick2: ovirt-node04.example.local:/raidvol/volb/brick
Options Reconfigured:
nfs.disable: on
user.cifs: disable
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
storage.owner-uid: 36
storage.owner-gid: 36


Here is the getfattr command in node03 and node 04:
--


 getfattr -d -m . -e hex
/raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
getfattr: Removing leading '/' from absolute path names
# file: raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
trusted.afr.RaidVolB-client-0=0x
trusted.afr.RaidVolB-client-1=0x
trusted.afr.dirty=0x
trusted.gfid=0x1c15d0cb1cca4627841c395f7b712f73



[root@ovirt-node04 ~]# getfattr -d -m . -e hex
/raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
getfattr: Removing leading '/' from absolute path names
# file: raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
trusted.afr.RaidVolB-client-0=0x
trusted.afr.RaidVolB-client-1=0x
trusted.afr.dirty=0x
trusted.gfid=0x1c15d0cb1cca4627841c395f7b712f73


Am i supposed to run those commands on the mounted brick?:
--
127.0.0.1:RaidVolB on
/rhev/data-center/mnt/glusterSD/127.0.0.1:RaidVolB type fuse.glusterfs
(rw,default_permissions,allow_other,max_read=131072)


At the very beginning i thought i removed the file with "rm
/raidvol/volb/brick//1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids"
hoping gluster would then fix itself somehow :)
It was gone but it seems to be here again. Dunno if this is any help.


Here is gluster volume heal RaidVolB info on both nodes:
--

[root@ovirt-node03 ~]#  gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick/
Number of entries: 0


[root@ovirt-node04 ~]#  gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick/
Number of entries: 0


Thanks a lot,
Mario



On Wed, Jan 28, 2015 at 4:57 PM, Ravishankar N  wrote:
>
> On 01/28/2015 08:34 PM, Ml Ml wrote:
>>
>> Hello Ravi,
>>
>> thanks a lot for your reply.
>>
>> The Data on ovirt-node03 is the one which i want.
>>
>> Here are the infos collected by following the howto:
>>
>> https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md
>>
>>
>>
>> [root@ovirt-node03 ~]# gluster volume heal RaidVolB info split-brain
>> Gathering list of split brain entries on volume RaidVolB has been
>> successful
>>
>> Brick ovirt-node03.example.local:/raidvol/volb/brick
>> Number of entries: 0
>>
>> Brick ovirt-node04.example.local:/raidvol/volb/brick
>> Number of entries: 14
>> atpath on brick
>> ---
>> 2015-01-27 17:33:00 
>> 2015-01-27 17:34:01 
>> 2015-01-27 17:35:04 
>> 2015-01-27 17:36:05 /ids
>> 2015-01-27 17:37:06 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> 2015-01-27 17:37:07 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> 2015-01-27 17:38:08 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> 2015-01-27 17:38:21 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> 2015-01-27 17:39:22 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> 2015-01-27 17:40:23 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> 2015-01-27 17:41:24 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
>> 2015-01-27 17:42:25 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/id

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-28 Thread Ravishankar N


On 01/28/2015 08:34 PM, Ml Ml wrote:

Hello Ravi,

thanks a lot for your reply.

The Data on ovirt-node03 is the one which i want.

Here are the infos collected by following the howto:
https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md



[root@ovirt-node03 ~]# gluster volume heal RaidVolB info split-brain
Gathering list of split brain entries on volume RaidVolB has been successful

Brick ovirt-node03.example.local:/raidvol/volb/brick
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick
Number of entries: 14
atpath on brick
---
2015-01-27 17:33:00 
2015-01-27 17:34:01 
2015-01-27 17:35:04 
2015-01-27 17:36:05 /ids
2015-01-27 17:37:06 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:37:07 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:38:08 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:38:21 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:39:22 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:40:23 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:41:24 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:42:25 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:43:26 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:44:27 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids

[root@ovirt-node03 ~]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick/
Number of entries: 0


Hi Mario,
Is "/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids" a file or a directory?
Whatever it is, it should be shown in the output of the heal info / heal info
split-brain commands of both nodes. But I see it being listed only under
node03.

Also, heal info is showing zero entries for both nodes which is strange.

Are node03 and node04 bricks of the same replica pair? Can you share 
'gluster volume info` of RaidVolB?
How did you infer that there is a split-brain? Does accessing the 
file(s) from the mount give input/output error?
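
A simple way to check is to read the file through the fuse mount (using the
mount path that appears elsewhere in this thread):

cat "/rhev/data-center/mnt/glusterSD/127.0.0.1:RaidVolB/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids" > /dev/null
# "Input/output error" here is the usual symptom of a file in split-brain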




[root@ovirt-node03 ~]# getfattr -d -m . -e hex
/raidvol/volb/brick/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
getfattr: Removing leading '/' from absolute path names
# file: raidvol/volb/brick/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
trusted.afr.RaidVolB-client-0=0x
trusted.afr.RaidVolB-client-1=0x
trusted.afr.dirty=0x
trusted.gfid=0x1c15d0cb1cca4627841c395f7b712f73
What is the getfattr output of this file on the other brick? The afr 
specific xattrs being all zeros certainly don't indicate the possibility 
of a split-brain
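
As background for reading those values: each trusted.afr.<volname>-client-N
xattr is 12 bytes, split into three 4-byte pending-operation counters. This
breakdown follows the split-brain debugging document linked elsewhere in this
thread; the sample value below is made up:

# layout: 0x<data: 8 hex digits><metadata: 8 hex digits><entry: 8 hex digits>
trusted.afr.RaidVolB-client-1=0x000000020000000000000000
# -> 2 pending data operations recorded against client-1, nothing pending
#    for metadata or entries; the all-zero values in the output above record
#    no pending operations at all, hence no split-brain indication.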


The "Resetting the relevant changelogs to resolve the split-brain: "
part of the howto is now a little complicated. Do i have a data or
meta split brain now?
I guess i have a data split brain in my case, right?

What are my next setfattr commands now in my case if i want to keep the
data from node03?

Thanks a lot!

Mario


On Wed, Jan 28, 2015 at 9:44 AM, Ravishankar N  wrote:

On 01/28/2015 02:02 PM, Ml Ml wrote:

I want to either take the file from node03 or node04. i really don’t

mind. Can i not just tell gluster that it should use one node as the
„current“ one?

Policy based split-brain resolution [1] which does just that, has been
merged in master and should be available in glusterfs 3.7.
For the moment, you would have to modify the xattrs on the one of the bricks
and trigger heal. You can see
https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md
on how to do it.

Hope this helps,
Ravi

[1] http://review.gluster.org/#/c/9377/



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-28 Thread Jeff Darcy
> On 01/27/2015 11:43 PM, Joe Julian wrote:
> > No, there's not. I've been asking for this for years.
> Hey Joe,
> Vijay and I were just talking about this today. We were
> wondering if you could give us the inputs to make it a feature to implement.
> Here are the questions I have:
> Basic requirements if I understand correctly are as follows:
> 1) User should be able to fix the split-brain without any intervention
> from admin as the user knows best about the data.
> 2) He should be able to somehow preview the data before selecting
> the copy which he/she wants to preserve.

One possibility would be to implement something like DHT's
filter_loc_subvol_key, though perhaps using child indices instead of
translator names.  Another would be a script which can manipulate
volfiles and use GFAPI to fetch a specific version of a file.  I've
written several scripts which can do the necessary volfile manipulation.
If we finally have a commitment to do something like this, actually
implementing it will be the easy part.
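
As a rough illustration of the volfile idea (not Jeff's actual scripts): take
a copy of the fuse client volfile for the volume (it lives under
/var/lib/glusterd/vols/<volname>/ on a server), strip it down by hand or with
a script so that only one protocol/client subvolume remains and the replicate
translator is removed, then mount that graph read-only to read that brick's
copy of the file. The file names and mount point below are placeholders:

# after editing the copied volfile down to a single brick:
glusterfs --volfile=/tmp/data1-one-brick.vol --read-only /mnt/inspect
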
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-28 Thread Ml Ml
Hello Ravi,

thanks a lot for your reply.

The Data on ovirt-node03 is the one which i want.

Here are the infos collected by following the howto:
https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md



[root@ovirt-node03 ~]# gluster volume heal RaidVolB info split-brain
Gathering list of split brain entries on volume RaidVolB has been successful

Brick ovirt-node03.example.local:/raidvol/volb/brick
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick
Number of entries: 14
atpath on brick
---
2015-01-27 17:33:00 
2015-01-27 17:34:01 
2015-01-27 17:35:04 
2015-01-27 17:36:05 /ids
2015-01-27 17:37:06 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:37:07 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:38:08 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:38:21 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:39:22 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:40:23 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:41:24 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:42:25 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:43:26 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
2015-01-27 17:44:27 /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids

[root@ovirt-node03 ~]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
Number of entries: 0

Brick ovirt-node04.example.local:/raidvol/volb/brick/
Number of entries: 0


[root@ovirt-node03 ~]# getfattr -d -m . -e hex
/raidvol/volb/brick/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
getfattr: Removing leading '/' from absolute path names
# file: raidvol/volb/brick/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
trusted.afr.RaidVolB-client-0=0x
trusted.afr.RaidVolB-client-1=0x
trusted.afr.dirty=0x
trusted.gfid=0x1c15d0cb1cca4627841c395f7b712f73


The "Resetting the relevant changelogs to resolve the split-brain: "
part of the howto is now a little complicated. Do i have a data or
meta split brain now?
I guess i have a data split brain in my case, right?

What are my next setfattr commands now in my case if i want to keep the
data from node03?

Thanks a lot!

Mario


On Wed, Jan 28, 2015 at 9:44 AM, Ravishankar N  wrote:
>
> On 01/28/2015 02:02 PM, Ml Ml wrote:
>>
>> I want to either take the file from node03 or node04. i really don’t
>> >mind. Can i not just tell gluster that it should use one node as the
>> >„current“ one?
>
> Policy based split-brain resolution [1] which does just that, has been
> merged in master and should be available in glusterfs 3.7.
> For the moment, you would have to modify the xattrs on the one of the bricks
> and trigger heal. You can see
> https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md
> on how to do it.
>
> Hope this helps,
> Ravi
>
> [1] http://review.gluster.org/#/c/9377/
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-28 Thread Pranith Kumar Karampuri


On 01/27/2015 11:43 PM, Joe Julian wrote:

No, there's not. I've been asking for this for years.

Hey Joe,
   Vijay and I were just talking about this today. We were 
wondering if you could give us the inputs to make it a feature to implement.

Here are the questions I have:
Basic requirements if I understand correctly are as follows:
1) User should be able to fix the split-brain without any intervention 
from admin as the user knows best about the data.
2) He should be able to somehow preview the data before selecting
the copy which he/she wants to preserve.


What I am not able to figure out is: how do we give the user the ability
to inspect the file?

Any inputs here would be greatly appreciated.

Pranith


On 01/27/2015 10:09 AM, Ml Ml wrote:

Hello List,

i was able to produce a split brain:

[root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/

Number of entries: 1

Brick ovirt-node04.example.local:/raidvol/volb/brick/
/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
Number of entries: 1




I want to either take the file from node03 or node04. i really don’t
mind. Can i not just tell gluster that it should use one node as the
„current“ one?

Like with DRBD: drbdadm connect --discard-my-data 

Is there a similar way with gluster?



Thanks,
Mario

# rpm -qa | grep gluster
---
glusterfs-fuse-3.6.2-1.el6.x86_64
glusterfs-server-3.6.2-1.el6.x86_64
glusterfs-libs-3.6.2-1.el6.x86_64
glusterfs-3.6.2-1.el6.x86_64
glusterfs-cli-3.6.2-1.el6.x86_64
glusterfs-rdma-3.6.2-1.el6.x86_64
vdsm-gluster-4.14.6-0.el6.noarch
glusterfs-api-3.6.2-1.el6.x86_64
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-28 Thread Ravishankar N


On 01/28/2015 02:02 PM, Ml Ml wrote:

I want to either take the file from node03 or node04. i really don’t
>mind. Can i not just tell gluster that it should use one node as the
>„current“ one?
Policy based split-brain resolution [1] which does just that, has been 
merged in master and should be available in glusterfs 3.7.
For the moment, you would have to modify the xattrs on the one of the 
bricks and trigger heal. You can see 
https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md 


on how to do it.

Hope this helps,
Ravi

[1] http://review.gluster.org/#/c/9377/

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-28 Thread Ml Ml
Can anyone help me here please?

On Tue, Jan 27, 2015 at 7:09 PM, Ml Ml  wrote:
> Hello List,
>
> i was able to produce a split brain:
>
> [root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
> Brick ovirt-node03.example.local:/raidvol/volb/brick/
> 
> Number of entries: 1
>
> Brick ovirt-node04.example.local:/raidvol/volb/brick/
> /1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
> Number of entries: 1
>
>
>
>
> I want to either take the file from node03 or node04. i really don’t
> mind. Can i not just tell gluster that it should use one node as the
> „current“ one?
>
> Like with DRBD: drbdadm connect --discard-my-data 
>
> Is there a similar way with gluster?
>
>
>
> Thanks,
> Mario
>
> # rpm -qa | grep gluster
> ---
> glusterfs-fuse-3.6.2-1.el6.x86_64
> glusterfs-server-3.6.2-1.el6.x86_64
> glusterfs-libs-3.6.2-1.el6.x86_64
> glusterfs-3.6.2-1.el6.x86_64
> glusterfs-cli-3.6.2-1.el6.x86_64
> glusterfs-rdma-3.6.2-1.el6.x86_64
> vdsm-gluster-4.14.6-0.el6.noarch
> glusterfs-api-3.6.2-1.el6.x86_64
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-27 Thread Joe Julian

No, there's not. I've been asking for this for years.

On 01/27/2015 10:09 AM, Ml Ml wrote:

Hello List,

i was able to produce a split brain:

[root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/

Number of entries: 1

Brick ovirt-node04.example.local:/raidvol/volb/brick/
/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
Number of entries: 1




I want to either take the file from node03 or node04. i really don’t
mind. Can i not just tell gluster that it should use one node as the
„current“ one?

Like with DRBD: drbdadm connect --discard-my-data 

Is there a similar way with gluster?



Thanks,
Mario

# rpm -qa | grep gluster
---
glusterfs-fuse-3.6.2-1.el6.x86_64
glusterfs-server-3.6.2-1.el6.x86_64
glusterfs-libs-3.6.2-1.el6.x86_64
glusterfs-3.6.2-1.el6.x86_64
glusterfs-cli-3.6.2-1.el6.x86_64
glusterfs-rdma-3.6.2-1.el6.x86_64
vdsm-gluster-4.14.6-0.el6.noarch
glusterfs-api-3.6.2-1.el6.x86_64
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users