Hello,
I don't use version 3.7 in production.
I think you need to try new options (I don't know how they improve work with big
files, but they should improve work with small files):
gluster v set prodcmsroot client.event-threads 4
gluster v set prodcmsroot server.event-threads 4
gluster v set prodcmsroot
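For reference, a sketch of how those tuning options might be applied and then verified (assumptions: volume name prodcmsroot taken from the commands above; 'gluster volume get' is available in recent 3.7 releases — on older versions check /var/lib/glusterd/vols/<VOLNAME>/info instead):

```shell
# Raise the number of epoll worker threads on both the client and
# server side of the volume (glusterfs 3.7+; the default is 2 each).
gluster volume set prodcmsroot client.event-threads 4
gluster volume set prodcmsroot server.event-threads 4

# Verify what is actually configured now.
gluster volume get prodcmsroot client.event-threads
gluster volume get prodcmsroot server.event-threads
```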
On 10/14/2015 07:02 PM, Игорь Бирюлин wrote:
>
> Hello,
> today in my 2-node replica set I've found a split-brain. The 'ls' command
> started returning 'Input/output error'.
>
>
> What does the mount log (/var/log/glusterfs/.log) say when
> you get this error?
>
> Can you run
' in glusterfs meaning?
Best regards,
Igor
2015-10-14 20:13 GMT+03:00 Ravishankar N <ravishan...@redhat.com>:
>
>
> On 10/14/2015 10:05 PM, Игорь Бирюлин wrote:
>
> Thanks for your reply.
>
> If I do listing in mount point (/repo):
> # ls /repo/xxx/keyrings/debian-keyring
Hello,
today in my 2-node replica set I've found a split-brain. The 'ls' command
started returning 'Input/output error'.
But the command 'gluster v heal VOLNAME info split-brain' does not show the
problem files:
# gluster v heal repofiles info split-brain
Brick dist-int-master03.xxx:/storage/gluster_brick_repofiles
Hello,
I've installed SSL/TLS following this documentation:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/ch08s03.html
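For context, the setup that guide describes boils down to roughly these steps (a sketch, not a substitute for the linked documentation; it assumes the default glusterfs certificate paths and the volume name repofiles used later in this thread):

```shell
# On every node: create a private key and a self-signed certificate.
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key \
    -subj "/CN=$(hostname)" -out /etc/ssl/glusterfs.pem

# Concatenate all nodes' certificates into the shared CA file and
# distribute it to every server and client.
cat node1.pem node2.pem > /etc/ssl/glusterfs.ca

# Enable TLS on the I/O path for the volume.
gluster volume set repofiles client.ssl on
gluster volume set repofiles server.ssl on

# Optional: also encrypt the management (glusterd) path. This marker
# file must exist on every server AND client, then restart glusterd.
touch /var/lib/glusterd/secure-access
```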
Hello,
indeed, restarting the self-heal daemon has solved my problem.
Thank you very much!
Best regards,
Igor
2015-09-10 11:30 GMT+03:00 Jesper Led Lauridsen TS Infra server :
Hello all.
Today I got a split-brain on a 2-node replica set installation.
I have solved the problem by removing files from the bricks:
find /storage/gluster_brick_repo -samefile
/storage/gluster_brick_repo/.glusterfs/f7/bd/f7bdbdab-8dda-498f-ab03-6dcdfa2ed435
-delete -print
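The mechanics of that find command can be demonstrated without gluster at all (a sketch under assumptions: a plain directory under /tmp stands in for the brick, and a manually created hard link stands in for the GFID link that gluster keeps under .glusterfs; 'fake-gfid' is a made-up name). 'find -samefile' matches every hard link to the same inode, so one invocation removes both the named file and its .glusterfs link:

```shell
# Fake a brick: a data file plus a hard link under .glusterfs,
# the way glusterfs links each file to its GFID path.
rm -rf /tmp/splitbrain_demo
mkdir -p /tmp/splitbrain_demo/.glusterfs/f7/bd
echo data > /tmp/splitbrain_demo/file1
ln /tmp/splitbrain_demo/file1 /tmp/splitbrain_demo/.glusterfs/f7/bd/fake-gfid

# Delete the file and every hard link to it, printing what was removed.
# GNU find stats the -samefile argument once up front, so deleting it
# during the traversal is safe.
find /tmp/splitbrain_demo -samefile /tmp/splitbrain_demo/.glusterfs/f7/bd/fake-gfid \
    -delete -print
```

On a real brick this must be done on the brick path directly (never through the mount point), which is exactly what the command above does.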
But in the output of "gluster v
I have enabled SSL/TLS on gluster.
After that I cannot check heal status.
# gluster volume heal repofiles info
repofiles: Not able to fetch volfile from glusterd
Volume heal failed
#
In log I see:
[2015-08-25 20:22:21.745210] E [rpc-clnt.c:362:saved_frames_unwind] (--
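One guess worth checking when management encryption is enabled (this is an assumption about the setup, not something stated in the snippet): 'gluster volume heal ... info' spawns a separate helper process that fetches the volfile from glusterd itself, so the node running the command needs the same secure-access marker file and readable certificates that glusterd uses. A sketch of what to verify, assuming the default paths:

```shell
# The heal helper needs the management-encryption marker too.
ls -l /var/lib/glusterd/secure-access

# And it must be able to read the TLS material.
ls -l /etc/ssl/glusterfs.key /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca

# Then retry:
gluster volume heal repofiles info
```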
It looks like split-brain.
Check:
gluster volume heal VOLUMENAME info
gluster volume heal VOLUMENAME info split-brain
2015-08-21 17:06 GMT+03:00 Mathieu Chateau mathieu.chat...@lotp.fr:
Hello,
Just in case: did you create and test from the client (and not locally
on any brick)?
Hello,
I am testing SSL/TLS support in gluster.
I have a gluster replica set with 2 nodes:
# gluster volume info
Volume Name: repofiles
Type: Replicate
Volume ID: 4b0e2a74-f1ca-4fe7-8518-23919e1b5fa0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1:
I have studied information on page:
https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md
and I cannot solve the split-brain by following this instruction.
I have tested it on gluster 3.6 and it doesn't work there; it works only on
gluster 3.7. I am trying it on gluster 3.7.2.
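For what it's worth, the 3.7-only CLI commands that document describes look roughly like this (a sketch based on the linked page; VOLNAME, HOSTNAME, brick paths, and file paths are placeholders):

```shell
# Resolve one split-brained file by picking the bigger copy as the source.
gluster volume heal VOLNAME split-brain bigger-file /path/to/file

# Or pick a specific brick's copy as the source for that file.
gluster volume heal VOLNAME split-brain source-brick HOSTNAME:/brick/path /path/to/file

# Or resolve every split-brained file on the volume from one brick.
gluster volume heal VOLNAME split-brain source-brick HOSTNAME:/brick/path
```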
I
On 07/15/2015 05:06 PM, Игорь Бирюлин wrote:
Hello,
I have made a split-brain specially for testing (file /1 in the output).
And check with command:
[14:20:10] root@xxx04:/repo # gluster v heal repofiles info
Brick xxx03:/storage/gluster_brick_repofiles/
/ - Is in split-brain
/1
Thank you very much for your detailed description.
I've understood how to use glusterfs in this situation.
2015-06-19 4:41 GMT+03:00 Ravishankar N ravishan...@redhat.com:
On 06/19/2015 01:06 AM, Игорь Бирюлин wrote:
Is it a bug?
How can I understand that the volume is stopped if in gluster volume
Thank you very much for your advice!
But why does 'gluster volume info' show that my volume is started before
I run 'gluster volume start volname force'?
2015-06-18 14:18 GMT+03:00 Ravishankar N ravishan...@redhat.com:
On 06/18/2015 04:25 PM, Игорь Бирюлин wrote:
Thank you for your answer!
I
Is it a bug?
How can I understand that the volume is stopped if in 'gluster volume info'
I see 'Status: Started'?
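The distinction the replies point at can be seen with two commands: 'gluster volume info' reports the configured state (it stays "Started" until someone explicitly stops the volume), while 'gluster volume status' reports whether the brick processes are actually running. A sketch, with VOLNAME as a placeholder:

```shell
# Configured state: remains "Status: Started" even if brick
# processes have died.
gluster volume info VOLNAME | grep Status

# Runtime state: per-brick Online Y/N, ports, and PIDs.
gluster volume status VOLNAME
```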
2015-06-18 22:07 GMT+03:00 Atin Mukherjee atin.mukherje...@gmail.com:
On Jun 18, 2015 8:51 PM, Игорь Бирюлин biryul...@gmail.com wrote:
Sorry, I didn't check
Hello.
I have an installation with 2 servers and one volume of type Replicate.
The volume is mounted on these 2 servers too.
If we turn off one server, the other will keep working and the mounted volume
can be used without problems.
But if we reboot the other server while the first is turned off (or gluster was